Using Deep Learning for Demand Forecasting with Amazon SageMaker
This project provides an end-to-end solution for the demand forecasting task using LSTNet, a state-of-the-art deep learning model available in GluonTS, and Amazon SageMaker.
The input data is a multivariate time series.
An example is the hourly electricity consumption of 321 users over a period of 41 months. Here is a snapshot of the normalized data:
We have provided an example in the notebook of how to feed your time-series data to GluonTS. To convert CSV data or other formats to the GluonTS format, please see the customization section.
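As a minimal sketch of what that conversion can look like (the file name electricity.csv and its column layout are assumptions for illustration, not part of this solution):

```python
import pandas as pd
from gluonts.dataset.common import ListDataset

# Assumed layout: a CSV indexed by timestamp, with one column per user.
df = pd.read_csv("electricity.csv", index_col=0, parse_dates=True)

# A multivariate GluonTS dataset is a single entry whose target has
# shape (num_series, num_timesteps), hence the transpose.
train_ds = ListDataset(
    [{"start": df.index[0], "target": df.values.T}],
    freq="H",               # hourly frequency, matching the example data
    one_dim_target=False,   # multivariate target
)
```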
For example, we can estimate the hourly electricity consumption of the 321 users for the coming week.
We have implemented LSTNet, a state-of-the-art deep learning model available in GluonTS.
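A minimal sketch of constructing and training such an estimator follows; the hyperparameter values are illustrative only, the import paths assume the MXNet-based GluonTS releases this solution targets (they vary across versions), and train_ds comes from the previous sketch:

```python
from gluonts.model.lstnet import LSTNetEstimator
from gluonts.mx.trainer import Trainer

estimator = LSTNetEstimator(
    freq="H",
    prediction_length=24 * 7,  # forecast one week ahead
    context_length=24 * 14,    # condition on two weeks of history
    num_series=321,            # one series per user
    skip_size=24,              # skip-RNN period (illustrative value)
    ar_window=24,              # autoregressive window (illustrative value)
    channels=48,               # convolutional channels (illustrative value)
    trainer=Trainer(epochs=10),
)

predictor = estimator.train(train_ds)  # train_ds from the previous sketch
```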
Running the solution end-to-end costs less than $5 USD. Please make sure you have read the Cleaning Up section below.
Demand forecasting uses historical time-series data to help streamline the supply-demand decision-making process across businesses. Examples include predicting the number of product units to stock, or, as in this solution, the amount of electricity users will consume over the coming week.
The status quo approaches for time-series forecasting include classical statistical methods such as ARIMA and exponential smoothing.
These methods often require tedious data preprocessing and feature generation prior to model training. A main advantage of deep learning (DL) methods such as LSTNet is that they automate the feature generation step, handling data normalization, lags, multiple time scales, categorical data, and missing values, while offering better predictive power and fast GPU-enabled training and deployment.
Please check out our blog post for more details.
You will need an AWS account to use this solution. Sign up for an account here.
To run this JumpStart 1P Solution and have the infrastructure deployed to your AWS account, you will need an active SageMaker Studio instance (see Onboard to Amazon SageMaker Studio). When your Studio instance is Ready, use the instructions in SageMaker JumpStart to 1-Click Launch the solution.
The solution artifacts are included in this GitHub repository for reference.
Note: Solutions are available in most regions, including us-west-2 and us-east-1.
Caution: Cloning this GitHub repository and running the code manually could lead to unexpected issues! Use the AWS CloudFormation template. You'll get an Amazon SageMaker Notebook instance that's been correctly set up and configured to access the other resources in the solution.
cloudformation/
  deep-demand-forecast.yaml: The root CloudFormation nested stack which creates the AWS stack for this solution
  deep-demand-forecast-sagemaker-notebook-instance.yaml: Creates the SageMaker notebook instance
  deep-demand-forecast-permissions.yaml: Manages all the permissions necessary to launch the stack
  deep-demand-forecast-endpoint.yaml: Creates the demo endpoint used in demo.ipynb
  solution-assistant/: Deletes the created resources, such as the endpoint and S3 bucket, during cleanup
src/
  preprocess/
    container/: To build and register the preprocessing ECR job
      Dockerfile: Docker container config
      build_and_push.sh: Build-and-push bash script used in deep-demand-forecast.ipynb
      requirements.txt: Dependencies for preprocess.py
    container_build/: Uses CodeBuild to build the container for ECR
    preprocess.py: Preprocessing script
  deep_demand_forecast/: Contains the train and inference code
    train.py: SageMaker training code
    inference.py: SageMaker inference code
    data.py: GluonTS data preparation
    metrics.py: A training metric
    monitor.py: Prepares results for visualization
    utils.py: Helper functions
    requirements.txt: Dependencies for the SageMaker MXNet Estimator
  demo.ipynb: Demo notebook to quickly get predictions from the demo endpoint
  deep-demand-forecast.ipynb: See below
What does deep-demand-forecast.ipynb Offer?

The notebook trains an LSTNet estimator on the electricity consumption data, a multivariate time-series dataset capturing electricity consumption (in kW) at 15-minute frequency from 2011-01-01 to 2014-05-26. We compare model performance by visualizing the metrics MASE vs. sMAPE.
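As a rough sketch of how such metrics can be computed with GluonTS (test_ds, a held-out dataset covering the prediction window, and predictor, the trained LSTNet predictor, are assumed from the earlier sketches; exact APIs vary by GluonTS version):

```python
from gluonts.evaluation import MultivariateEvaluator
from gluonts.evaluation.backtest import make_evaluation_predictions

# Draw sample forecasts over the prediction window of each series.
forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_ds, predictor=predictor, num_samples=100
)

# Aggregate metrics include MASE and sMAPE, among others.
agg_metrics, item_metrics = MultivariateEvaluator()(ts_it, forecast_it)
print(agg_metrics)
```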
Finally, we deploy an endpoint for the trained model and can interactively assess its performance by comparing the train and test data with the predictions.
For example, here, re-training with more epochs would help improve model performance, after which we can re-deploy.
Here is the architecture for the end-to-end training and deployment process:
Here is the architecture of the inference process:
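As a hedged sketch of invoking such an endpoint with boto3 (the endpoint name and request payload below are assumptions; the actual request schema is defined by inference.py and demonstrated in demo.ipynb):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Hypothetical payload; the real format is defined by inference.py.
payload = {"start": "2014-05-19 00:00:00", "target": [[2.1, 2.3, 1.9]]}

response = runtime.invoke_endpoint(
    EndpointName="deep-demand-forecast-endpoint",  # hypothetical name
    ContentType="application/json",
    Body=json.dumps(payload),
)
predictions = json.loads(response["Body"].read())
```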
When you've finished with this solution, make sure that you delete all unwanted AWS resources. AWS CloudFormation can be used to automatically delete all standard resources that have been created by the solution and notebook. Go to the AWS CloudFormation Console, and delete the parent stack. Choosing to delete the parent stack will automatically delete the nested stacks.
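Stack deletion can also be scripted; here is a minimal sketch with boto3 (the stack name is a placeholder for whatever you named the parent stack):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder name: substitute the parent stack's actual name.
cloudformation.delete_stack(StackName="sagemaker-soln-deep-demand-forecast")
```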
Caution: You need to manually delete any extra resources that you may have created in this notebook. Some examples include extra Amazon S3 buckets (in addition to the solution's default bucket), extra Amazon SageMaker endpoints (using a custom name), and extra Amazon ECR repositories.
To use your own data, please take a look at the customization section mentioned above.
This project is licensed under the Apache-2.0 License.