OpenVINO Versions

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

2024.0.0

1 month ago

Summary of major features and improvements  

  • More Generative AI coverage and framework integrations to minimize code changes.

    • Improved out-of-the-box experience for TensorFlow* sentence encoding models through the installation of OpenVINO™ toolkit Tokenizers.
    • OpenVINO™ toolkit now supports Mixture of Experts (MoE), an architecture that enables more efficient generative models in the inference pipeline.
    • JavaScript developers now have seamless access to the OpenVINO API through a new JavaScript binding, enabling smooth integration into Node.js applications.
    • New and noteworthy models validated: Mistral, StableLM-tuned-alpha-3b, and StableLM-Epoch-3B.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Improved quality on INT4 weight compression for LLMs by adding the popular technique, Activation-aware Weight Quantization, to the Neural Network Compression Framework (NNCF). This addition reduces memory requirements and helps speed up token generation.
    • Experience enhanced LLM performance on Intel® CPUs, with internal memory state enhancements and INT8 precision for the KV-cache, tailored specifically for multi-query LLMs like ChatGLM.
    • Easier optimization and conversion of Hugging Face models – compress LLMs to INT8 and INT4 with the Hugging Face Optimum command-line interface and export models to the OpenVINO format (see the sketch after this list). Note this is part of Optimum-Intel, which needs to be installed separately.
    • The OpenVINO™ 2024.0 release makes development easier by integrating more OpenVINO™ features with the Hugging Face* ecosystem. Store quantization configurations for popular models directly in Hugging Face to compress models to INT4 format while preserving accuracy and performance.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • A preview plugin architecture of the integrated Neural Processing Unit (NPU), part of Intel® Core™ Ultra processors, is now included in the main OpenVINO™ package on PyPI.
    • Improved performance on ARM* by enabling the ARM threading library. In addition, we now support multi-core ARM platforms and have enabled FP16 precision by default on macOS*.
    • Improved performance on ARM platforms using the throughput hint, which increases the efficiency of CPU core and memory bandwidth utilization.
    • New and improved LLM serving samples from OpenVINO™ Model Server for multi-batch inputs and Retrieval Augmented Generation (RAG).
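The Optimum-Intel flow mentioned above can be sketched as follows. This is a hedged example rather than the definitive interface: the model ID and output directory are placeholders, and OVWeightQuantizationConfig is assumed to be available in a recent Optimum-Intel release (pip install optimum[openvino]).

    # Export a Hugging Face LLM to OpenVINO format with INT4 weight compression.
    # Approximate CLI equivalent:
    #   optimum-cli export openvino --model <id> --weight-format int4 <out_dir>
    from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

    model_id = "mistralai/Mistral-7B-v0.1"  # placeholder model ID
    model = OVModelForCausalLM.from_pretrained(
        model_id,
        export=True,  # convert the PyTorch weights to OpenVINO IR on the fly
        quantization_config=OVWeightQuantizationConfig(bits=4),  # INT4 weights
    )
    model.save_pretrained("mistral-7b-ov-int4")  # placeholder output directory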

Support Change and Deprecation Notices

  • Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Discontinued in 2024.0:
    • Runtime components:
      • Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider using the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
      • OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API transition guide for reference).
      • All ONNX Frontend legacy API (known as ONNX_IMPORTER_API).
      • The 'PerformanceMode.UNDEFINED' property in the OpenVINO Python API.
    • Tools:
      • Deployment Manager.
      • Accuracy Checker.
      • Post-Training Optimization Tool (POT).
      • Support for Apache MXNet, Caffe, and Kaldi model formats.
  • Deprecated and to be removed in the future:
    • The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from installation options and distribution channels beginning with OpenVINO 2025.0.
    • Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using OpenVINO Model Converter (API call: OVC) instead; follow the model conversion transition guide for more details.
    • The OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning). A sketch of both transitions follows this list.
    • OpenVINO Model Server components:
      • Reshaping a model in runtime based on the incoming requests (auto shape and auto batch size) is deprecated and will be removed in the future. Using OpenVINO’s dynamic shape models is recommended instead.
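To make the Model Optimizer and Affinity API transitions above concrete, here is a minimal, hedged sketch using the openvino Python package. The input model path is a placeholder, and the "ENABLE_CPU_PINNING" string key is assumed to mirror the C++ ov::hint::enable_cpu_pinning property.

    import openvino as ov

    # OpenVINO Model Converter (ovc / ov.convert_model) replaces the
    # deprecated Model Optimizer flow.
    model = ov.convert_model("model.onnx")  # placeholder input model
    ov.save_model(model, "model.xml")       # serialize to OpenVINO IR

    # The CPU pinning hint replaces the deprecated Affinity API.
    core = ov.Core()
    compiled = core.compile_model(model, "CPU", {"ENABLE_CPU_PINNING": True})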

You can find OpenVINO™ toolkit 2024.0 release here:

Acknowledgements

Thanks for contributions from the OpenVINO developer community: @rghvsh @YaritaiKoto @Abdulrahman-Adel @jvr0123 @sami0i @guy-tamir @rupeshs @karanjakhar @abhinav231-valisetti @rajatkrishna @lukazlim @siddhant-0707 @tiger100256-hu

Release documentation is available here: https://docs.openvino.ai/2024
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2024-0.html

2023.3.0

2 months ago

Summary of major features and improvements  

  • More Generative AI coverage and framework integrations to minimize code changes.

    • Introducing the OpenVINO Gen AI repository on GitHub, which demonstrates native C and C++ pipeline samples for Large Language Models (LLMs). String tensors and tokenizers are now supported natively as inputs, reducing overhead and easing production.
    • New and noteworthy models validated: Mistral, Zephyr, Qwen, ChatGLM3, and Baichuan.
    • New Jupyter Notebooks for Latent Consistency Models (LCM) and Distil-Whisper. Updated LLM Chatbot notebook to include LangChain, Neural Chat, TinyLlama, ChatGLM3, Qwen, Notus, and Youri models.
    • torch.compile is now fully integrated with OpenVINO and includes a hardware 'options' parameter, allowing seamless inference hardware selection by leveraging the plugin architecture in OpenVINO (see the sketch after this list).
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • As part of the Neural Network Compression Framework (NNCF), INT4 weight compression model formats are now fully supported on Intel® Xeon® CPUs in addition to Intel® Core™ and iGPU, offering better performance, lower memory usage, and more accuracy opportunities when using LLMs.
    • Improved performance of transformer-based LLMs on CPU and GPU using a stateful model technique that increases memory efficiency by sharing internal states among multiple iterations of inference.
    • Easier optimization and conversion of Hugging Face models – compress LLMs to INT8 and INT4 with the Hugging Face Optimum command-line interface and export models to the OpenVINO format. Note this is part of Optimum-Intel, which needs to be installed separately.
    • Tokenizer and TorchVision transform support is now available in the OpenVINO runtime (via a new API), requiring less preprocessing code and enhancing performance by automatically handling this model setup. More details on Tokenizers support in the Ecosystem section.
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • Full support for 5th Gen Intel® Xeon® Scalable processors (codename Emerald Rapids).
    • Further optimized performance on Intel® Core™ Ultra (codename Meteor Lake) CPU with the latency hint, by leveraging both P-cores and E-cores.
    • Improved performance on ARM platforms using the throughput hint, which increases the efficiency of CPU core and memory bandwidth utilization.
    • Preview JavaScript API to enable Node.js development, with access to the JavaScript binding via source code. See details below.
    • Improved model serving of LLMs through OpenVINO Model Server. This not only enables LLM serving over KServe v2 gRPC and REST APIs for more flexibility, but also improves throughput by running processing like tokenization on the server side. More details in the Ecosystem section.
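A minimal sketch of the torch.compile integration noted above, assuming a recent PyTorch plus the openvino package, which registers the "openvino" backend; the options dictionary shown here illustrates the hardware selection parameter.

    import torch
    import torchvision.models as models
    import openvino.torch  # noqa: F401 -- registers the "openvino" backend

    model = models.resnet50(weights="DEFAULT").eval()
    # The 'options' parameter selects inference hardware via OpenVINO plugins.
    compiled = torch.compile(model, backend="openvino", options={"device": "CPU"})

    with torch.no_grad():
        out = compiled(torch.randn(1, 3, 224, 224))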

Support Change and Deprecation Notices

  • The OpenVINO™ Development Tools package (pip install openvino-dev) is deprecated and will be removed from installation options and distribution channels beginning with the 2025.0 release. For more details, refer to the OpenVINO Legacy Features and Components page.
  • Ubuntu 18.04 support is discontinued in the 2023.3 LTS release. The recommended version of Ubuntu is 22.04.
  • Starting with 2023.3, OpenVINO no longer supports Python 3.7 due to the Python community discontinuing support. Update to a newer version (currently 3.8-3.11) to avoid interruptions.
  • All ONNX Frontend legacy API (known as ONNX_IMPORTER_API) will no longer be available in the 2024.0 release. The 'PerformanceMode.UNDEFINED' property in the OpenVINO Python API will be discontinued in the 2024.0 release.
  • Tools:
    • Deployment Manager is deprecated and will be supported for two years according to the LTS policy. Visit the selector tool to see package distribution options or the deployment guide documentation.
    • Accuracy Checker is deprecated and will be discontinued with 2024.0.  
    • Post-Training Optimization Tool (POT) has been deprecated and the 2023.3 LTS is the last release that supports the tool. Developers are encouraged to use the Neural Network Compression Framework (NNCF) for this feature.
    • Model Optimizer is deprecated and will be fully supported until the 2025.0 release. We encourage developers to perform model conversion through OpenVINO Model Converter (API call: OVC). Follow the model conversion transition guide for more details.
    • Deprecated support for a git patch for NNCF integration with huggingface/transformers. The recommended approach is to use huggingface/optimum-intel for applying NNCF optimization on top of models from Hugging Face.
    • Support for Apache MXNet, Caffe, and Kaldi model formats is deprecated and will be discontinued with the 2024.0 release.
  • Runtime:
    • Intel® Gaussian & Neural Accelerator (Intel® GNA) will be deprecated in a future release. We encourage developers to use the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
    • OpenVINO C++/C/Python 1.0 APIs are deprecated and will be discontinued in the 2024.0 release. Please use API 2.0 in your applications going forward to avoid disruption.
    • The OpenVINO property Affinity API will be deprecated from 2024.0 and discontinued in 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning).

You can find OpenVINO™ toolkit 2023.3 release here:

Acknowledgements

Thanks for contributions from the OpenVINO developer community: @rghvsh, @YaritaiKoto, @siddhant-0707, @sydarb, @kk271kg, @ahmadchalhoub, @ma7555, @Bhaskar365

Release documentation is available here: https://docs.openvino.ai/2023.3
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-lts/2023-3.html

2023.2.0

5 months ago

Summary of major features and improvements  

  • More Generative AI coverage and framework integrations to minimize code changes.

    • Expanded model support for direct PyTorch model conversion – automatically convert additional models directly from PyTorch or execute via torch.compile with OpenVINO as the backend.
    • New and noteworthy models supported – we have enabled models used for chatbots, instruction following, code generation, and many more, including prominent models like LLaVA, chatGLM, Bark (text to audio), and LCM (Latent Consistency Models, an optimized version of Stable Diffusion).
    • Easier optimization and conversion of Hugging Face models – compress LLM models to Int8 with the Hugging Face Optimum command line interface and export models to the OpenVINO IR format.
    • OpenVINO is now available on Conan, a package manager which enables more seamless package management for large-scale projects for C and C++ developers.
  • Broader Large Language Model (LLM) support and more model compression techniques.

    • Accelerate inference for LLM models on Intel® Core™ CPU and iGPU with the use of Int8 model weight compression.
    • Expanded model support for dynamic shapes for improved performance on GPU.
    • Preview support for Int4 model format is now included. Int4 optimized model weights are now available to try on Intel® Core™ CPU and iGPU, to accelerate models like Llama 2 and chatGLM2.
    • The following Int4 model compression formats are supported for inference in runtime (see the sketch after this list):
      • Generative Pre-training Transformer Quantization (GPTQ); GPTQ-compressed models are accessible through Hugging Face repositories.
      • Native Int4 compression through the Neural Network Compression Framework (NNCF).
  • More portability and performance to run AI at the edge, in the cloud, or locally.

    • In 2023.1 we announced full support for the ARM architecture; we have now improved performance further by enabling FP16 model formats for LLMs and integrating additional acceleration libraries to reduce latency.
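A hedged sketch of the native NNCF Int4 weight-compression path referenced above: the IR path is a placeholder, and the INT4_SYM mode enum is assumed from a recent NNCF release (the default mode compresses weights to INT8).

    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("llama-2-7b.xml")  # placeholder OpenVINO IR

    # Compress weights to a 4-bit symmetric format (enum name assumed).
    compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT4_SYM)
    ov.save_model(compressed, "llama-2-7b-int4.xml")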

Support Change and Deprecation Notices

  • The OpenVINO™ Development Tools package (pip install openvino-dev) is deprecated and will be removed from installation options and distribution channels with 2025.0. To learn more, refer to the OpenVINO Legacy Features and Components page. To ensure optimal performance, install the OpenVINO package (pip install openvino), which includes essential components such as OpenVINO Runtime, OpenVINO Converter, and Benchmark Tool.
  • Tools: 
    • Deployment Manager is deprecated and will be removed in the 2024.0 release.
    • Accuracy Checker is deprecated and will be discontinued with 2024.0.   
    • Post-Training Optimization Tool (POT)  is deprecated and will be discontinued with 2024.0. 
    • Model Optimizer is deprecated and will be fully supported up until the 2025.0 release. Model conversion to the OpenVINO IR format should be performed through OpenVINO Model Converter, which is part of the PyPI package. Follow the Model Optimizer to OpenVINO Model Converter transition guide for a smoother transition. Known limitations are TensorFlow models with TF1 control flow and object detection models. These limitations relate to gaps in TensorFlow direct conversion capabilities, which will be addressed in upcoming releases.
    • PyTorch 1.13 support is deprecated in Neural Network Compression Framework (NNCF).
  • Runtime: 
    • Intel® Gaussian & Neural Accelerator (Intel® GNA) will be deprecated in a future release. We encourage developers to use the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
    • OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.

You can find OpenVINO™ toolkit 2023.2 release here:

Acknowledgements

Thanks for contributions from the OpenVINO developer community: @siddhant-0707, @NsdHSO, @mahimairaja, @SANTHOSH-MAMIDISETTI, @rsato10, @PRATHAM-SPS

Release documentation is available here: https://docs.openvino.ai/2023.2
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-2.html

2023.2.0.dev20230922

6 months ago

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2023.2.0.dev20230922 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

What's Changed

  • CPU runtime:
    • Optimized YOLOv8n and YOLOv8s models on BF16/FP32.
    • Optimized Falcon model on 4th Generation Intel® Xeon® Scalable Processors.
  • GPU runtime:
    • INT8 weight compression further improves LLM performance. PR #19548
    • Optimized GEMM and fully connected (FC) primitives on iGPU. PR #19780
  • TensorFlow FE:
    • Added support for Selu operation. PR #19528
    • Added support for XlaConvV2 operation. PR #19466
    • Added support for TensorListLength and TensorListResize operations. PR #19390
  • PyTorch FE:
    • New operations supported
      • aten::minimum and aten::maximum. PR #19996
      • aten::broadcast_tensors. PR #19994
      • aten::logical_and, aten::logical_or, aten::logical_not, and aten::logical_xor. PR #19981
      • aten::scatter_reduce, and extended aten::scatter. PR #19980
      • prim::TupleIndex. PR #19978
      • Mixed precision in aten::min/max. PR #19936
      • aten::tile. PR #19645
      • aten::one_hot. PR #19779
      • PReLU. PR #19515
      • aten::swapaxes. PR #19483
      • Non-boolean inputs for "or" and "and" operations. PR #19268
  • Torchvision NMS can accept negative scores. PR #19826

New openvino_notebooks:

  • Visual Question Answering and Image Captioning using BLIP

Fixed GitHub issues

  • Fixed #19784 “[Bug]: Cannot install libprotobuf-dev along with libopenvino-2023.0.2 on Ubuntu 22.04” with PR #19788
  • Fixed #19617 “Add a clear error message when creating an empty Constant” with PR #19674
  • Fixed #19616 “Align openvino.compile_model and openvino.Core.compile_model functions” with PR #19778
  • Fixed #19469 “[Feature Request]: Add SeLu activation in the OpenVino IR (TensorFlow Conversion)” with PR #19528
  • Fixed #19019 “[Bug]: Low performance of the TF quantized model.” with PR #19735
  • Fixed #19018 “[Feature Request]: Support aarch64 python wheel for Linux” with PR #19594
  • Fixed #18831 “Question: openvino support for Nvidia Jetson Xavier ?” with PR #19594
  • Fixed #18786 “OpenVINO Wheel does not install Debug libraries when CMAKE_BUILD_TYPE is Debug” with PR #19197
  • Fixed #18731 “[Bug] Wrong output shapes of MaxPool” with PR #18965
  • Fixed #18091 “[Bug] 2023.0 Version crashes on Jetson Nano - L4T - Ubuntu 18.04” with PR #19717
  • Fixed #7194 “Conan for simplifying dependency management” with PR #17580

Acknowledgements

Thanks for contributions from the OpenVINO developer community: @siddhant-0707, @PRATHAM-SPS, @okhovan

Full Changelog: https://github.com/openvinotoolkit/openvino/compare/2023.1.0.dev20230811...2023.2.0.dev20230922

2023.1.0

7 months ago

Summary of major features and improvements

  • More Generative AI options with Hugging Face and improved PyTorch model support.
    • NEW: Your PyTorch solutions are now even further enhanced with OpenVINO. You have more options, and you no longer need to convert to ONNX for deployment. Developers can now use their API of choice, PyTorch or OpenVINO, for added performance benefits (see the sketch after this list). Additionally, users can automatically import and convert PyTorch models for quicker deployment. You can continue to make the most of OpenVINO tools for advanced model compression and deployment advantages, ensuring flexibility and a range of options.
    • torch.compile (preview) – OpenVINO is now available as a backend through PyTorch torch.compile, empowering developers to utilize OpenVINO toolkit through PyTorch APIs. This feature has also been integrated into the Automatic1111 Stable Diffusion Web UI, helping developers achieve accelerated performance for Stable Diffusion 1.5 and 2.1 on Intel CPUs and GPUs in both Native Linux and Windows OS platforms.
    • Optimum Intel – Hugging Face and Intel continue to enhance top generative AI models by optimizing execution, making your models run faster and more efficiently on both CPU and GPU. OpenVINO serves as a runtime for inferencing execution. New PyTorch auto import and conversion capabilities have been enabled, along with support for weights compression to achieve further performance gains.
  • Broader LLM model support and more model compression techniques
    • Enhanced performance and accessibility for Generative AI: Runtime performance and memory usage have been significantly optimized, especially for Large Language models (LLMs). Models used for chatbots, instruction following, code generation, and many more, including prominent models like BLOOM, Dolly, Llama 2, GPT-J, GPTNeoX, ChatGLM, and Open-Llama have been enabled.
    • Improved LLMs on GPU – Model coverage for dynamic shapes support has been expanded, further helping the performance of generative AI workloads on both integrated and discrete GPUs. Furthermore, memory reuse and weight memory consumption for dynamic shapes have been improved.
    • Neural Network Compression Framework (NNCF) now includes an 8-bit weights compression method, making it easier to compress and optimize LLM models. SmoothQuant method has been added for more accurate and efficient post-training quantization for Transformer-based models.
  • More portability and performance to run AI at the edge, in the cloud or locally.
    • NEW: Support for Intel® Core™ Ultra (codename Meteor Lake). This new generation of Intel CPUs is tailored to excel in AI workloads with a built-in inference accelerator.
    • Integration with MediaPipe – Developers now have direct access to this framework for building multipurpose AI pipelines. Easily integrate with OpenVINO Runtime and OpenVINO Model Server to enhance performance for faster AI model execution. You also benefit from seamless model management and version control, as well as custom logic integration with additional calculators and graphs for tailored AI solutions. Lastly, you can scale faster by delegating deployment to remote hosts via gRPC/REST interfaces for distributed processing.
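A minimal sketch of the direct PyTorch conversion path described above; the model choice and input shape are placeholders, and ov.convert_model is assumed from the 2023.1 Python API.

    import torch
    import torchvision.models as models
    import openvino as ov

    torch_model = models.mobilenet_v3_small(weights="DEFAULT").eval()
    # Convert the in-memory PyTorch module directly; no ONNX export required.
    ov_model = ov.convert_model(torch_model, example_input=torch.randn(1, 3, 224, 224))

    core = ov.Core()
    compiled = core.compile_model(ov_model, "CPU")
    result = compiled(torch.randn(1, 3, 224, 224).numpy())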

Support Change and Deprecation Notices

  • OpenVINO™ Development Tools package (pip install openvino-dev) is currently being deprecated and will be removed from installation options and distribution channels with 2025.0. For more info, see the documentation for Legacy Features.
  • Tools:
    • Accuracy Checker is deprecated and will be discontinued with 2024.0.
    • Post-Training Optimization Tool (POT)  has been deprecated and will be discontinued with 2024.0.
  • Runtime:
    • Intel® Gaussian & Neural Accelerator (Intel® GNA) is being deprecated; the GNA plugin will be discontinued with 2024.0.
    • OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
    • Python 3.7 will be discontinued with the 2023.2 LTS release.

You can find OpenVINO™ toolkit 2023.1 release here:

Release documentation is available here: https://docs.openvino.ai/2023.1
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-1.html

2023.0.2

7 months ago

This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.

Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes, and two years of security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.

Major changes:

  • OpenVINO GNA Plugin:
    • Fixes an issue where the GNA device would not work on Gemini Lake (GLK) platforms.
    • Fixes a memory leak during HLK testing.
  • OpenVINO CPU Plugin:
    • Fixes issues that occurred in Multi-Threading 2.0 when getting CPU mapping details on Windows 7 platforms.
  • OpenVINO Core:
    • Fixes an issue that occurred when compiling a PyTorch model with the unfold op.

You can find OpenVINO™ toolkit 2023.0.2 release here:

Release documentation is available here: https://docs.openvino.ai/2023.0/home.html

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

2023.1.0.dev20230811

8 months ago

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2023.1.0.dev20230811 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

What's Changed

  • CPU runtime:
    • Enabled weights decompression support for Large Language Models (LLMs). The implementation supports avx2 and avx512 HW targets for Intel® Core™ processors, improving performance in latency mode (comparison: FP32 vs. FP32+INT8 weights). For 4th Generation Intel® Xeon® Scalable Processors (formerly Sapphire Rapids) this INT8 decompression feature improves performance compared to pure BF16 inference. PRs: #18915, #19111
    • Reduced memory consumption of the ‘compile model’ stage by moving constant folding of Transpose nodes to the CPU Runtime side. PR: #18877
    • Set FP16 inference precision by default for non-convolution networks on ARM. Convolution networks will be executed in FP32. PRs: #19069, #19192, #19176
  • GPU runtime: Added paddings for dynamic convolutions to improve performance for models like Stable Diffusion v2.1. PR: #19001
  • Python API:
    • Added the torchvision.transforms object to OpenVINO preprocessing. PR: #17934
    • All Python tools related to OpenVINO are now available via a single namespace, improving the user experience through better API readability (see the namespace sketch after this list). PR: #18157
  • TensorFlow FE:
    • Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
    • Added support for 8 new operations.
  • PyTorch FE:
    • Added support for 7 new operations. See the PyTorch model conversion documentation for details.
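As a small illustration of the consolidated Python namespace mentioned under Python API above (the IR path is a placeholder; previously most of these symbols lived under openvino.runtime):

    import openvino as ov  # single top-level namespace

    core = ov.Core()
    model = core.read_model("model.xml")         # placeholder IR path
    compiled = core.compile_model(model, "CPU")  # ready for inference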

New openvino_notebooks

Fixed GitHub issues

  • Fixed #18978 "Webassembly build fails" with PR #19005
  • Fixed #18847 "Debugging OpenVINO Python GIL Error" with PR #18848
  • Fixed #18465 "OpenVINO can't be built in an environment that has an 'ambient' oneDNN installation" with PR #18805

Acknowledgements

Thanks for contributions from the OpenVINO developer community: @DmitriyValetov, @kai-waang

Full Changelog: 2023.1.0.dev20230728...2023.1.0.dev20230811

2023.1.0.dev20230728

8 months ago

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2023.1.0.dev20230728 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

2023.0.1

9 months ago

This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.

Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes, and two years of security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.

Major changes:

  • POT:
    • Fixes errors caused by the default usage of the MMap allocator (enabled in 2023.0); only Windows is affected (see the sketch after this list).
  • OpenVINO Core:
    • Fixes an issue with properly handling directories in read_model() on Windows.
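For context on the MMap fix above, a hedged sketch of toggling memory-mapped model reading; the "ENABLE_MMAP" string key is assumed to mirror the C++ ov::enable_mmap property introduced in 2023.0, and the IR path is a placeholder.

    import openvino as ov

    core = ov.Core()
    # Disable memory-mapped weight loading as a workaround on affected
    # platforms (property key assumed; see lead-in above).
    core.set_property({"ENABLE_MMAP": False})
    model = core.read_model("model.xml")  # placeholder IR path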

You can find OpenVINO™ toolkit 2023.0.1 release here:

Release documentation is available here: https://docs.openvino.ai/2023.0/home.html

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

2023.1.0.dev20230623

9 months ago

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2023.1.0.dev20230623 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/