Generic IoT data simulator: replay existing datasets or generate data on the fly. Supports various IoT platforms out of the box.
IoT-data-simulator is a tool that lets you simulate IoT device data with great flexibility. With this tool you won't need to code yet another simulator for each IoT project.
Simulator features you will like:
If you would like to read more about why we created this tool, see the Motivation section.
https://www.youtube.com/playlist?list=PLkiB-GyDBMbfA7AbenLUpvYZKeyuqlBPy
Prerequisites
docker (v. 17.05+) and docker-compose should be installed.
Startup
Run the following commands in the folder containing the docker-compose.yml and .env files from the release folder:
docker-compose pull
docker-compose up
After a while the UI will be available at http://localhost:8090 or http://docker-machine-ip:8090, depending on your OS and Docker version.
To use IoT-data-simulator you should understand the following concepts:
session – the main composite application entity. Data is generated and sent to external IoT platforms only while a session is running. A session consists of a data definition (optional), a timer, devices (optional), processing rules, and a target system;
data definition – describes the structure of the data sent to a target system. Consists of a dataset, a schema, or both;
dataset (required if no schema is provided) – a .csv or .json file uploaded by the user to the tool's internal object storage (minio) via the UI, or stored in the external Amazon S3 service. If a dataset is provided, its content is replayed by the tool during a session run;
schema (required if no dataset is provided) – describes the dataset structure or the data to be generated at runtime. A schema can be derived from a dataset or created from scratch with the UI constructor. If no schema is provided, data is generated by a custom JS function supplied by the user;
target system – describes the external system that the simulated data is sent to;
device – encapsulates a specific set of values and target system properties.
Replay existing dataset as is
The simplest use case to get started with the tool is replaying an existing dataset as is.
We need to create a session that sends dataset records to a target system without any modification. For this use case we don't need a data schema, just a JS function that returns the current dataset record (datasetEntry). Go to the create session screen, then:
function process(state, datasetEntry, deviceName) {
return datasetEntry;
}
The session stops once all records from the dataset have been read.
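The same hook can also transform each record before it is sent. A minimal sketch, assuming you want to tag every record with the device name and a send time (the added field names are hypothetical, not part of the simulator's API):

```javascript
// Hypothetical transformation: copy the dataset record and attach
// the sending device's name plus the wall-clock send time.
// The (state, datasetEntry, deviceName) signature matches the
// simulator's process() hook.
function process(state, datasetEntry, deviceName) {
    return Object.assign({}, datasetEntry, {
        device: deviceName,   // tag the record with the sending device
        sentTime: Date.now()  // attach the current time in milliseconds
    });
}
```

The original record is left untouched because Object.assign copies it into a fresh object.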
Replay dataset with updated date/timestamp properties
Replaying a dataset without modifying any parameters is not a very useful use case, so let's replay a dataset with updated date/timestamp properties. Go to the create session screen, then:
{"timestamp": 1517303155600, "id": 1 }
{"timestamp": 1517303155800, "id": 2 }
...
Select timer options.
Skip the Select devices step.
Apply data processing rules. The Current time rule is selected by default for date/timestamp properties: the timestamp property is updated with the current time, while the difference between the timestamps of two adjacent dataset records stays the same. If a record contains more than one timestamp property, the first one gets the Current time rule by default and the others get the Relative time rule, meaning their timestamps are updated relative to the Current time timestamp.
Select or create dummy target system
Enter new session name and complete creation flow
Run session and explore session console
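The combined effect of the Current time and Relative time rules on a single record can be sketched in plain JavaScript (this is an illustration of the behaviour, not the simulator's actual implementation; the function and parameter names are hypothetical):

```javascript
// Sketch of the Current time / Relative time behaviour: the first
// timestamp property becomes "now", and every other timestamp keeps
// its original offset from that first one.
function shiftTimestamps(record, timestampKeys) {
    const now = Date.now();
    const base = record[timestampKeys[0]];  // anchor: first timestamp property
    const shifted = Object.assign({}, record);
    timestampKeys.forEach(function (key) {
        shifted[key] = now + (record[key] - base);  // preserve relative offsets
    });
    return shifted;
}
```

For a record with two timestamps 200 ms apart, the shifted record still has them 200 ms apart, but anchored at the current time.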
Generate data
If you would like to generate data on the fly, this can be achieved in two ways:
a) With JS function. Go to Create session screen, then:
function process(state, datasetEntry, deviceName) {
return {
timestamp: moment().valueOf()
}
}
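A variant of the function above that does not depend on moment and adds a second generated field, a random temperature reading (the field names are illustrative assumptions, not prescribed by the simulator):

```javascript
// Hypothetical generated payload: a timestamp plus a random
// temperature reading between 20 and 25 degrees. Date.now() stands
// in for moment().valueOf(), which returns the same epoch milliseconds.
function process(state, datasetEntry, deviceName) {
    return {
        timestamp: Date.now(),
        temperature: 20 + Math.random() * 5  // uniform in [20, 25)
    };
}
```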
b) With schema
Derive schema from dataset and use it without dataset (data generation mode)
This scenario is very useful when a dataset has a complex structure and you don't want to create a schema for it from scratch. This can be achieved with the following steps:
Store state between data processing iterations. Implement counter
Let's say we would like to create a simple counter with the following payload sent to a target system:
{
"count": 1 // number which increases on each iteration
}
In this case we can use data generation mode (see the Usage section above). On step #2, create a schema with one integer property as shown on the screenshot below:
On step #5, select the "Custom function" rule for the "count" property. Open the editor and apply the following JS function:
function custom(ruleState, sessionState, deviceName) {
if(typeof ruleState.counter === 'undefined') {
ruleState.counter = 0;
}
ruleState.counter++;
return ruleState.counter;
}
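Because the simulator keeps ruleState alive between iterations, repeated calls yield 1, 2, 3, and so on. A quick local check of that behaviour (the surrounding harness is our own, only the custom() function mirrors the rule above):

```javascript
// Same counter rule as above, exercised locally to show that
// ruleState persists between data processing iterations.
function custom(ruleState, sessionState, deviceName) {
    if (typeof ruleState.counter === 'undefined') {
        ruleState.counter = 0;
    }
    ruleState.counter++;
    return ruleState.counter;
}

const ruleState = {};          // the simulator would keep this object alive
custom(ruleState, {}, 'dev');  // → 1
custom(ruleState, {}, 'dev');  // → 2
```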
Send data to different Kafka topics:
To send data to different Kafka topics, we should create several devices that override the target system properties with their "topic data". Let's generate data for our example. Go to the Create session screen, then:
Generate dataset and save it to local minio object storage
To generate data and save it as a file in the local minio object storage, create a Local storage target system while creating the session, and populate the dataset field with the desired file name.
Inject device property to processing rules
Go to Create session screen, then:
IoT-data-simulator can send payloads to the following target systems (available security types in parentheses):
Q: I've updated a session but it still uses the old parameters. A: Session parameters are updated only when the session is stopped. If the session was paused, it still uses the previous parameters;
Q: Which IoT platforms do you support? A: In our company we have used the IoT data simulator with the Thingsboard, AWS, and Predix platforms, but that doesn't mean it can't be used with others.
An IoT data simulator is a necessary tool in any IoT project. While real-world sensors and devices are required for final integration testing, to make sure the system works end to end without surprises, using real devices is very impractical during the development and early testing phases, when tests tend to be quick, short, and to fail fast and often.
Data simulator solves several problems at once:
Thanks for your interest in contributing!
Get started here.