Arm - ComputeLibrary - is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
Arm - Arm NN - is the most performant machine learning (ML) inference engine for Android and Linux, accelerating ML on Arm Cortex-A CPUs and Arm Mali GPUs.
Baidu - Paddle Lite - is a multi-platform, high-performance deep learning inference engine.
DeepLearningKit - is an open-source deep learning framework for Apple's iOS, OS X, and tvOS.
Edge Impulse - Interactive platform to generate models that can run on microcontrollers. They are also quite active on social networks, covering recent news on EdgeAI/TinyML.
Intel - OpenVINO - Comprehensive toolkit for optimizing and deploying deep learning models for faster inference.
JDAI Computer Vision - dabnn - is an accelerated binary neural networks inference framework for mobile platforms.
Meta - PyTorch Mobile - is a framework for helping mobile developers and machine learning engineers embed PyTorch ML models on-device.
Microsoft - DeepSpeed - is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Microsoft - ELL - allows you to design and deploy intelligent machine-learned models onto resource-constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit.
Microsoft - ONNX Runtime - is a cross-platform, high-performance ML inference and training accelerator.
Nvidia - TensorRT - is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
OAID - Tengine - is a lightweight, high-performance, modular inference engine for embedded devices.