ONNX Release Notes

Open standard for machine learning interoperability

v1.15.0

4 months ago

ONNX v1.15.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

  • Added new operators: ImageDecoder (https://github.com/onnx/onnx/pull/5294), RegexFullMatch (https://github.com/onnx/onnx/pull/5401), StringConcat (https://github.com/onnx/onnx/issues/5350), StringSplit (https://github.com/onnx/onnx/pull/5371), AffineGrid (https://github.com/onnx/onnx/issues/5225), Gelu (https://github.com/onnx/onnx/issues/5277)

  • Updated existing operators: ConstantOfShape (https://github.com/onnx/onnx/pull/5390), GridSample (https://github.com/onnx/onnx/pull/5010), ReduceMax (https://github.com/onnx/onnx/pull/5539), ReduceMin (https://github.com/onnx/onnx/pull/5539), IsNaN (https://github.com/onnx/onnx/pull/5583), IsInf (https://github.com/onnx/onnx/pull/5583), DFT (https://github.com/onnx/onnx/pull/5514), LabelEncoder (https://github.com/onnx/onnx/pull/5453)

  • New features, bug fixes, and documentation updates

ai.onnx opset version increased to 20 with the following changes:

  • New Operators (ai.onnx):

    • ImageDecoder: a new operator for decoding images, intended for use in preprocessing models
    • RegexFullMatch: a new operator for regular-expression matching, commonly used in feature preprocessing
    • StringConcat: takes two string tensors as input and returns the element-wise concatenation of the strings in each tensor
    • StringSplit: takes a string tensor as input and splits each element based on a delimiter attribute and a maxsplit attribute
    • AffineGrid: generates a 2D or 3D flow field (sampling grid) given a batch of affine matrices theta
    • Gelu: applies the Gaussian error linear unit function, or its approximation, to the input (see the sketch after these lists)
  • Operator Updates (ai.onnx): ConstantOfShape, GridSample, ReduceMax, ReduceMin, IsNaN, IsInf, DFT (see the Key Updates links above)

ai.onnx.ml opset version increased to 4 with the following changes:

  • Operator Updates (ai.onnx.ml):
    • LabelEncoder adds keys_as_tensor and values_as_tensor attributes
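
For illustration, here is a minimal sketch (not part of the release notes) that builds and checks a model using the new opset-20 Gelu operator. It relies only on the public onnx.helper, onnx.checker, and onnx.reference APIs, and assumes the pure-Python reference runtime implements Gelu in this release:

import numpy as np
import onnx
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# Single-node graph: Y = Gelu(X), using the tanh approximation added in opset 20
node = helper.make_node("Gelu", ["X"], ["Y"], approximate="tanh")
graph = helper.make_graph(
    [node], "gelu_example",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [3])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 20)])
onnx.checker.check_model(model)

# Evaluate with the pure-Python reference runtime (assumed to cover Gelu)
sess = ReferenceEvaluator(model)
print(sess.run(None, {"X": np.array([-1.0, 0.0, 1.0], dtype=np.float32)}))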

New functionality:

  • Enable empty list of values as attribute PR#5559
  • Update diff backend node tests for auto update doc PR#5604
  • Enable pylint checks with Ruff and remove pylint from lintrunner PR#5589
  • Treat inf/-inf as float literals in ONNX PR#5528
  • Create the onnxtxt serialization format PR#5524
  • Support JSON as a serialization target PR#5523
  • Support for parsing and printing empty list value as attribute PR#5516
  • Add auto update doc pipeline to help developers update docs PR#5450
  • Implement GELU as function op PR#5277
  • Integrate function-inlining with version-conversion PR#5211
  • Extend function type inference to handle missing optional parameters PR#5169
  • Create repr functions for OpSchema PR#5117
  • Utility to inline model-local functions PR#5105
  • Faster reference implementation for operator Conv based on im2col PR#5069
  • Support textproto as a serialization format PR#5112

ONNX now supports serializing to JSON, Text Proto, and the ONNX text representation

Users can now serialize the model proto to a text format by using a supported file extension or by supplying the format= argument to save_model.

For example

# model: onnx.ModelProto
onnx.save_model(model, "model.json")

will save the model as a JSON file.
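
The format= keyword mentioned above can also be passed explicitly. The sketch below is illustrative and assumes the format names ("json", "textproto", "onnxtxt") mirror the file extensions registered by this release:

# model: onnx.ModelProto
# format= is assumed to accept the same names as the registered extensions
onnx.save_model(model, "model.onnxtxt", format="onnxtxt")    # ONNX text representation
loaded = onnx.load_model("model.onnxtxt", format="onnxtxt")  # round-trip back to a ModelProto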

Shape inference enhancements

  • [Spec] output_shape for ConvTranspose should not have batch and channels PR#5400
  • Infer rank where reshape shape is inferred PR#5327

Bug fixes and infrastructure improvements

  • Do not use LFS64 on non-glibc Linux PR#5669
  • [Web] Use tensor_dtype_to_np_dtype instead of deprecated function PR#5593
  • Reject absolute path when saving external data PR#5566
  • Support Python editable builds PR#5558
  • Test onnxruntime 1.15 with opset 19/IR 9 and fix test source distribution PR#5376
  • Supports float 8 initializers in ReferenceEvaluator PR#5295
  • Fix check_tensor to work with large models on UNIX PR#5286
  • Fix check_tensor to work with large models on Windows PR#5227
  • Transpose scalar shape inference PR#5204
  • Enable RUFF as a formatter PR#5176
  • Correct AveragePool kernel shape in dilation test case PR#5158
  • Fix type constraints of Reshape(19) PR#5146
  • Add github action to check urls are valid PR#5434
  • Introduce optional cpplint in CI PR#5396
  • Test the serialization API with custom serializers PR#5315
  • [CI] Use ONNX Hub directly in test_model_zoo CI PR#5267
  • Clean up setup.py in favor of pyproject.toml PR#4879

Documentation updates

  • Merge the two contributing docs and create instructions for updating an op PR#5584
  • [Doc] Update README.md regarding Protobuf update and fix typo in Slice-13 spec PR#5435
  • Generate both onnx and onnx-ml operator docs when ONNX_ML=1 PR#5381
  • Publish md files under docs/ to the documentation site PR#5312
  • Update OpSchema docs to include new methods and classes PR#5297
  • Fix missing examples in documentation for ai.onnx.ml PR#5228
  • Modify OneHot operator explanation PR#5197
  • Update CIPipelines.md PR#5157
  • Extend python API documentation PR#5156
  • Update sphinx to create markdown pages for operators PR#5137

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

python setup.py develop deprecation

Direct invocation of setup.py is deprecated (see https://setuptools.pypa.io/en/latest/deprecated/commands.html). To build ONNX, users should switch to the following commands:

# Editable installation
# Before: python setup.py develop
# Now
pip install -e .

# Build wheel
# Before: python setup.py bdist_wheel
# Now
pip install --upgrade build
python -m build .

Contributors

Thanks to these individuals for their contributions in this release since the previous release: @adityagoel4512 @AlexandreEichenberger @andife @AtanasDimitrovQC @BowenBao @cbourjau @ClifHouck @guoyuhong @gramalingam @ilya-lavrenov @jantonguirao @jbachurski @jcwchen @justinchuby @leso-kn @linkerzhang @liqunfu @prasanthpul @slowlyideal @smk2007 @snnn @take-cheeze @xadupre @yuanyao-nv @zhenhuaw-me

v1.14.1

7 months ago

ONNX v1.14.1 is a patch release based on v1.14.0.

Bug fixes

  • Fix shape data propagation function to handle missing optional parameters #5219
  • Fix a couple of shape inference issues #5223
  • Extend function type inference to handle missing optional parameters #5169
  • Fix check_tensor to work with large models on Windows #5227
  • Fix check_tensor to work with large models on UNIX #5286

v1.14.0

10 months ago

ONNX v1.14.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Opset 19 is released

New operators

DeformConv added in https://github.com/onnx/onnx/pull/4783

Operator extensions

  • Equal - Support for the string data type added in https://github.com/onnx/onnx/pull/4828
  • AveragePool - New dilations attribute https://github.com/onnx/onnx/pull/4790
  • Pad - New wrap value for the mode attribute to support circular padding https://github.com/onnx/onnx/pull/4793 (see the sketch below)
  • Resize - New half_pixel_symmetric value for the coordinate_transformation_mode attribute https://github.com/onnx/onnx/pull/4862
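
As an illustration of the new Pad mode, here is a minimal sketch (not from the release notes) that pads a vector circularly under opset 19; it assumes the reference runtime's Pad implementation supports wrap:

import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# Pad a length-3 vector with 2 elements on each side, wrapping around circularly
node = helper.make_node("Pad", ["data", "pads"], ["padded"], mode="wrap")
graph = helper.make_graph(
    [node], "pad_wrap_example",
    [helper.make_tensor_value_info("data", TensorProto.FLOAT, [3]),
     helper.make_tensor_value_info("pads", TensorProto.INT64, [2])],
    [helper.make_tensor_value_info("padded", TensorProto.FLOAT, [7])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
sess = ReferenceEvaluator(model)
print(sess.run(None, {"data": np.array([1.0, 2.0, 3.0], dtype=np.float32),
                      "pads": np.array([2, 2], dtype=np.int64)}))  # [2. 3. 1. 2. 3. 1. 2.]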

IR updates (bump to 9)

Backend tests

Replaced real models with light models in backend tests. https://github.com/onnx/onnx/pull/4861 https://github.com/onnx/onnx/pull/4960

Support Protobuf v21

ONNX now supports Protobuf v21: https://github.com/onnx/onnx/pull/4956

Deprecation notice

Installation notice

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Contributors

Thanks to these individuals for their contributions in this release since the last 1.13.0 release: @jcwchen, @andife, @gramalingam, @xadupre, @justinchuby, @liqunfu, @yuanyao-nv, @jbachurski, @p-wysocki, @prasanthpul, @jantonguirao, @take-cheeze, @smk2007, @AlexandreEichenberger, @snnn, @daquexian, @linkerzhang.

v1.13.1

1 year ago

ONNX v1.13.1 is a patch release based on v1.13.0.

Bug fixes

  • Add missing f-string for DeprecatedWarningDict in mapping.py #4707
  • Fix types deprecated in numpy==1.24 #4721
  • Update URL for real models from ONNX Runtime #4865
  • Fix attribute substitution within subgraphs during function type/shape inference #4792
  • Handle variants of constant op in shape inference #4824
  • Fix parser bug in handling non-tensor types #4863
  • Fix function shape inference bug #4880

Announcement

  • Deprecate real model tests from onnx repo in next ONNX release #4885
  • Move onnx-weekly package from TestPyPI to PyPI and stop uploading onnx-weekly to TestPyPI after next ONNX release #4930

v1.13.0

1 year ago

ONNX v1.13.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

New operators

Operator extensions

Function updates

Reference Python runtime

A reference Python runtime that depends only on Python and NumPy has been added. #4483
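
For illustration, here is a minimal sketch (not from the release notes) of evaluating a tiny model with the new runtime; it assumes the onnx.reference.ReferenceEvaluator entry point introduced by #4483:

import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# Tiny single-node model: C = A + B
node = helper.make_node("Add", ["A", "B"], ["C"])
graph = helper.make_graph(
    [node], "add_example",
    [helper.make_tensor_value_info("A", TensorProto.FLOAT, [2]),
     helper.make_tensor_value_info("B", TensorProto.FLOAT, [2])],
    [helper.make_tensor_value_info("C", TensorProto.FLOAT, [2])],
)
model = helper.make_model(graph)
sess = ReferenceEvaluator(model)
print(sess.run(None, {"A": np.array([1.0, 2.0], dtype=np.float32),
                      "B": np.array([3.0, 4.0], dtype=np.float32)}))  # [array([4., 6.], dtype=float32)]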

Python 3.11 support

ONNX 1.13.0 supports Python 3.11. #4490

Apple Silicon support

Support for M1/M2 ARM processors has been added. #4642

More

ONNX 1.13.0 also comes with numerous:

  • bugfixes
  • infrastructure improvements
  • CI improvements
  • documentation updates
  • security updates

For full details see Logistics for ONNX Release 1.13.0.

Deprecation notice

  • TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE has been deprecated #4270
  • ONNXIFI: ONNX Interface for Framework Integration has been deprecated #4431

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Contributors

Thanks to these individuals for their contributions in this release since the last 1.12.0 release: @AnandKri, @cbourjau, @jcwchen, @gramalingam, @garymm, @GaetanLepage, @ilya-lavrenov, @jnovikov, @JackBoosY, @jbachurski, @tjich, @jantonguirao, @justinchuby, @natke, @philass, @prasanthpul, @p-wysocki, @SpaceIm, @stephenneuendorffer, @take-cheeze, @sechkova, @thiagocrepaldi, @xadupre, @mszhanyi, @yuanyao-nv, @andife, @daquexian, @kylesayrs, @liqunfu, @longlee0622, @HSQ79815, @williamberman, @YanBC

The list has been acquired with a script written by Aaron Bockover.

v1.12.0

1 year ago

ONNX v1.12.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx opset version increased to 17 with the following changes:

  • New operators (ai.onnx):
    • LayerNormalization (#4076)
    • SequenceMap (#3892)
    • Signal operators: DFT, HannWindow, HammingWindow, BlackmanWindow, MelWeightMatrix, STFT (#3741)
  • Operator Updates (ai.onnx):
    • [Scan] Remove unused type constraint I for newer Scan (opset 9+) (#4012)

Shape inference enhancements

  • Extend InferShapes to expose result of data propagation (#3879)
  • Update shape inference for constant of shape (#4141)
  • Catch missing input type in function shape inference (#4123)
  • Add shape inference for Expand using symbolic shape input (#3789)
  • Fix Expand shape inference: stop rank inference if the shape is symbolic (#4019)

Bug fixes and infrastructure improvements

  • Fix a bug in _get_initializer_tensors() (#4118)
  • Fix bug of resizeShapeInference for Resize13 (#4140)
  • Fix bug in SCE function body (#4038)
  • Use correct pytest types in backend (#3990) (#3994)
  • Checker should validate the node's inputs/outputs have names when its formal parameter is Variadic (#3979)
  • Loosen NumPy requirement to grant more flexibility (#4059)
  • Fix crash: Skip unused value_info for version_converter (#4079)
  • Use %d for integer in version_converter (#4182)
  • Extend parser to handle other types (#4136)

Documentation updates

  • Add documentation about functions to IR.md (#4180)
  • Clarify add new op documentation (#4150)
  • Clarify NonZero behavior for scalar input in spec (#4113)
  • Update shape inference documentation (#4163)
  • Fix a minor typo in operator Gather documentation (#4125)
  • Fix typo in CIPipelines.md (#4157)
  • Fix typo in slice doc (#4117)
  • Fix grammar in documents (#4094)
  • Clearer description of Slice (#3908)
  • Add OperatorSetId definition in docs (#4039)
  • Clean up protocol buffer definitions (#4201)
  • Change the wrong words of second layer input (#4044)
  • Clarify that op_type is case sensitive (#4096)

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Notes

  • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

Contributors

Thanks to these individuals for their contributions in this release since the last 1.11.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2022-02-08&to=2022-05-24&type=c): @jcwchen, @gramalingam, @xuzijian629, @garymm, @diyessi, @liqunfu, @jantonguirao, @daquexian, @fdwr, @andife, @wschin, @xadupre, @xkszltl, @snnn

v1.11.0

2 years ago

ONNX v1.11.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx opset version increased to 16 with the following changes:

  • New Operators (ai.onnx):
  • Operator Updates (ai.onnx):
    • Identity, add optional type support.
    • If, add optional data type support for output.
    • LeakyRelu, add bfloat16 type support.
    • Loop, add optional data type support for initial value and output.
    • PRelu, add bfloat16 type support.
    • RoiAlign, add an attribute coordinate_transformation_mode, correct the default behavior.
    • Scan, add bfloat16 type support for output.
    • ScatterElements, add reduction attribute (see the sketch after this list).
    • ScatterND, add reduction attribute.
    • Where, extend Where op to permit bfloat16 types.
    • GreaterOrEqual, add bfloat16 type support.
    • LessOrEqual, add bfloat16 type support.
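
For illustration, here is a minimal sketch (not from the release notes) of the new ScatterElements reduction attribute under opset 16; it assumes the pure-Python reference runtime (added in a later ONNX release) covers this case:

import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# With reduction="add", out[indices[i]] += updates[i] along axis 0
node = helper.make_node("ScatterElements", ["data", "indices", "updates"], ["out"],
                        axis=0, reduction="add")
graph = helper.make_graph(
    [node], "scatter_add_example",
    [helper.make_tensor_value_info("data", TensorProto.FLOAT, [4]),
     helper.make_tensor_value_info("indices", TensorProto.INT64, [2]),
     helper.make_tensor_value_info("updates", TensorProto.FLOAT, [2])],
    [helper.make_tensor_value_info("out", TensorProto.FLOAT, [4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 16)])
sess = ReferenceEvaluator(model)
print(sess.run(None, {"data": np.zeros(4, dtype=np.float32),
                      "indices": np.array([1, 1], dtype=np.int64),
                      "updates": np.array([2.0, 3.0], dtype=np.float32)}))  # [0. 5. 0. 0.]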

ai.onnx.ml opset version increased to 3 with the following changes:

New functionality:

Shape inference enhancements

  • Extend optional type inference. #3756
  • Make shape inference handle MapProto. #3772
  • Improve rank inference for Expand op. #3807
  • Enhance shape inference: ParseData/Transpose/QuantizeLinear. #3806
  • Honor existing dim_param in shape inference. #3896
  • Shape inference for functions. #3722
  • Use symbolic input for shape inference of ConstantOfShape. #3784

Bug fixes and infrastructure improvements

  • Use MSVC Runtime as dll for official ONNX Windows release. #3644
  • Simplify common version converter adapter design patterns. #3761
  • Use scalar for OneHot's depth to prevent confusion. #3774
  • Correct wrong subgraph test example for If operator. #3798
  • [Dup] Add SpaceToDepth test cases. #3786
  • Fix error in Pad op convert. #3778
  • Fix some examples for ArgMax. #3851
  • Shape inference should not propagate missing optional outputs. #3815
  • Check negative index for attributes of Slice-1. #3810
  • Cleanup type cast related warnings. #3801
  • Replace whitelist by safelist. #3900
  • Fix weekly/Linux CI failures: correct skip list and remove old numpy related code. #3916
  • Fix old ConvTranspose shape inference and softmax upgrader. #3893
  • Fix Linux i686 Release CI failure due to the latest NumPy. #3918
  • Simplify function definition of context-dependent functions. #3882
  • Migration to using main branch. #3925
  • Append dim even both dim value and param are not set. #3828
  • Bump to 10.15 in AzurePipeline because 10.14 was deprecated. #3941
  • Six: remove all references. #3926
  • For issue 3849 to confirm that type check is performed during checker. #3902
  • Remove testing ort-nightly for Mac Python 3.6 due to unsupported ort-nightly. #3953
  • Mypy: update to 0.760 and remove vendored protobuf stubs. #3939
  • Upgrade Windows version in AzurePipeline since 2017 was deprecated. #3957
  • Version converter for Softmax should not produce empty shape. #3861
  • Fix Cppcheck warning about memset on NULL backend_ids. #3970
  • Bug fix of extractor which misses local functions. #3954
  • Add bfloat16 type to a few ops missing it. #3960

Documentation updates

  • ONNX Hub Docs. #3712
  • Clarify definition of a tensor in IR docs. #3792
  • Document that Where supports multidirectional broadcasting. #3827
  • Sync build documentation in CONTRIBUTING.md. #3859
  • [CI][Doc] Add CI Pipelines doc/node tests verification. #3780
  • Remind release manager to remove old onnx-weekly packages after release. #3923
  • Fix the bug of shape in docs. #3927
  • Clean up README. #3961
  • Remove documentation about Python 2. #3963

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Notes

  • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

Additional Notes

  • ONNX will drop Python 3.6 support in next release because it has reached EOL.
  • ONNX will upgrade its NumPy version to 1.21.5 before next release to resolve vulnerability issue for old NumPy 1.16.6.
  • There will be infrastructure change to Linux packaging system to replace manylinux2010 with manylinux2014 or manylinux2.

Contributors

Thanks to these individuals for their contributions in this release since the last 1.10.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2021-07-30&to=2022-02-08&type=c): @jcwchen, @gramalingam, @garymm, @mhamilton723, @TomWildenhain-Microsoft, @neginraoof, @xuzijian629, @liqunfu, @gwang-msft, @chudegao, @AlexandreEichenberger, @rajeevsrao, @matteosal, @stillmatic, @askhade, @liuyu21, @jantonguirao, @shinh, @kevinch-nv, @shubhambhokare1, @hwangdeyu, @jiafatom, @postrational, @snnn, @jackwish

v1.10.2

2 years ago

ONNX v1.10.2 is a patch release based on v1.10.1.

Bug fixes:

  • Fix compilation error on older compilers (#3683)
  • Stricter check for Shape's input: check input type (#3757)

v1.10.1

2 years ago

This release is a patch release based on v1.10.0.

Bug fix:

v1.10.0

2 years ago

ONNX v1.10.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

IR Updates

Opset version 15

API

Infrastructure

Bug fixes

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Notes

  • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

Contributors

Thanks to these individuals for their contributions in this release: @jcwchen, @askhade, @gramalingam, @neginraoof, @matteosal, @postrational, @garymm, @yuslepukhin, @fdwr, @jackwish, @manbearian, @etusien, @impactaky, @rajeevsrao, @prasanthpul, @take-cheeze, @chudegao, @mindest, @yufenglee, @annajung, @hwangdeyu, @calvinmccarter-at-lightmatter, @ashbhandare, @xuzijian629, @IceTDrinker, @mrry