It is of potential interest to anyone developing applications on those platforms who has an appetite for performance, scalability, and reliability.
As Netherite is intended to be a drop-in backend replacement, it does not modify the application API. Existing DF and DTFx applications can switch to this backend with little effort. However, we do not support migrating existing task hub contents between different backends.
To get started, you can either try out the sample, or take an existing DF app and switch it to the Netherite backend. You can also read our documentation.
The hello sample.
For a comprehensive quick start on using Netherite with Durable Functions, take a look at the hello sample walkthrough and the associated video content. We included several scripts that make it easy to build, run, and deploy this application, both locally and in the cloud. This sample is also a great starting point for creating your own projects.
Configure an existing Durable Functions app for Netherite.
If you have a .NET Durable Functions application already, and want to configure it to use Netherite as the backend, do the following:
1. Add the NuGet package `Microsoft.Azure.DurableTask.Netherite.AzureFunctions` to your functions project (if using .NET) or your extensions project (if using TypeScript or Python).
2. Add `"type": "Netherite"` to the `storageProvider` section of your host.json. See the recommended host.json settings.
3. Configure `EventHubsConnection` with the connection string for the Event Hubs namespace. You can do this using an environment variable or a function app configuration setting.
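Assembled, the host.json change amounts to a fragment like the following minimal sketch. The surrounding `extensions`/`durableTask` structure is the standard Durable Functions host.json layout; Netherite-specific tuning options are omitted here, so consult the recommended host.json settings for the full list:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "Netherite"
      }
    }
  }
}
```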
Configure an existing Durable Task Application for Netherite.
If you have an application that uses the Durable Task Framework already, and want to configure it to use Netherite as the backend, do the following:
1. Add the NuGet package `Microsoft.Azure.DurableTask.Netherite` to your project.
2. Construct a `NetheriteOrchestrationService` object with the required settings, and then pass it as an argument to the constructors of `TaskHubClient` and `TaskHubWorker`.
For more information, see the DTFx sample.
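The steps above might be sketched as follows. This is an illustrative outline only: the exact settings properties and constructor signatures are assumptions from memory and may differ between Netherite versions, so treat the DTFx sample as the authoritative code.

```csharp
using DurableTask.Core;
using DurableTask.Netherite;

// Configure Netherite (the exact property names may vary by version).
var settings = new NetheriteOrchestrationServiceSettings
{
    HubName = "myTaskHub",
    // ... connection information for Azure Storage and Event Hubs ...
};

// The orchestration service implements both the service and client interfaces.
var service = new NetheriteOrchestrationService(settings, loggerFactory);

// Pass it to the standard DTFx worker and client.
var worker = new TaskHubWorker(service);
var client = new TaskHubClient(service);
```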
The default Azure Storage engine stores messages in Azure Storage queues and instance states in Azure Storage tables. It executes large numbers of small storage accesses. For example, executing a single orchestration with three activities may require a total of 4 dequeue operations, 3 enqueue operations, 4 table reads, and 4 table writes. Thus, the overall throughput quickly becomes limited by how many I/O operations Azure Storage allows per second.
To achieve better performance, Netherite represents queues and partition states differently, so that storage accesses can be batched: messages between partitions flow through Event Hubs, and each partition's state is persisted in Azure blobs using the FASTER key-value store.
For some other considerations about how to choose the engine, see the documentation.
The current version of Netherite is 1.4.1. Netherite supports almost all of the DT and DF APIs.
There are some notable differences from the default Azure Table storage provider; see the documentation for details.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets Microsoft's definition of a security vulnerability, please report it to us at the Microsoft Security Response Center (MSRC) at https://msrc.microsoft.com/create-report. Do not report security vulnerabilities through GitHub issues.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.