OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
This open-source version includes several components: Model Optimizer, OpenVINO™ Runtime, and the Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
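As an illustration, OpenVINO™ Runtime can read models in several of these formats directly. A minimal sketch using the Python API, assuming a local file named `model.onnx` (a hypothetical placeholder path):

```python
from openvino.runtime import Core

core = Core()
# The Runtime reads ONNX models directly; Model Optimizer can also
# convert models to OpenVINO IR ahead of time for deployment.
model = core.read_model("model.onnx")
compiled_model = core.compile_model(model, "CPU")
```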
The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.
| Device | Plugin | Library | Supported hardware |
|--------|--------|---------|--------------------|
| CPU | Intel CPU | openvino_intel_cpu_plugin | Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
| | ARM CPU | openvino_arm_cpu_plugin | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
| GPU | Intel GPU | openvino_intel_gpu_plugin | Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics |
| GNA | Intel GNA | openvino_intel_gna_plugin | Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor |
| VPU | Myriad plugin | openvino_intel_myriad_plugin | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
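To check which of these devices are present on a given machine, the Runtime can enumerate them at run time. A minimal sketch using the Python API:

```python
from openvino.runtime import Core

core = Core()
# Prints the device names available to the Runtime on this machine,
# e.g. ['CPU', 'GPU'].
print(core.available_devices)
```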
OpenVINO™ Toolkit also contains several plugins that simplify running models across multiple hardware devices (a short usage sketch follows the table):
| Plugin | Library | Description |
|--------|---------|-------------|
| Auto | openvino_auto_plugin | Auto plugin automatically selects an Intel device for inference |
| Auto Batch | openvino_auto_batch_plugin | Auto Batch plugin performs on-the-fly automatic batching (i.e., grouping inference requests together) to improve device utilization, with no programming effort from the user |
| Hetero | openvino_hetero_plugin | Heterogeneous execution enables automatic splitting of inference between several devices |
| Multi | openvino_auto_plugin | Multi plugin enables simultaneous inference of the same model on several devices in parallel |
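These modes are selected through the device string passed when compiling a model. A minimal sketch, again assuming a hypothetical local `model.onnx` and a machine that actually has both a CPU and a GPU:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.onnx")

# AUTO selects a suitable Intel device automatically.
compiled_auto = core.compile_model(model, "AUTO")

# MULTI runs the same model on several devices in parallel.
compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")

# HETERO splits one model between devices, here preferring GPU
# with CPU as the fallback for unsupported layers.
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")
```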
OpenVINO™ Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
The latest documentation for OpenVINO™ Toolkit is available here. It contains detailed information about all OpenVINO components and provides everything you need to create an application based on a binary OpenVINO distribution or your own OpenVINO build, without modifying the source code.
Developer documentation describes the architectural decisions applied inside OpenVINO components and provides everything you need to contribute to OpenVINO.
The list of OpenVINO tutorials:
The system requirements vary depending on platform and are available on dedicated pages:
See the OpenVINO Wiki for more information about the OpenVINO build process.
See CONTRIBUTING for details. Thank you!
Report questions, issues, and suggestions using:
* Other names and brands may be claimed as the property of others.