ONNX Versions

Open standard for machine learning interoperability

v1.3.0

5 years ago
  • ONNXIFI 1.0
  • Operator Set 8
    • Control Flow Operators graduated from experimental
    • Added new operator Expand
    • Updated operators Max, Min, Mean and Sum to support broadcasting
    • Support output indices in operator MaxPool
    • Various documentation improvements
  • Introduced Function concept for representing composed operators [experimental]
  • Enhanced shape inference
    • Support shape inference for Reshape operator with constant new shape
  • More ONNX optimization passes
    • Available passes are here
  • More operator backend tests
  • Opset Version Converter
    • Supported operators include: Add, Mul, Gemm, Relu, BatchNorm, Concat, Reshape, Sum, MaxPool, AveragePool, Dropout
    • All models in model zoo are covered, except tiny-yolo-v2 (PRelu needs adapter, WIP)
  • Quantization coming soon
    • We are currently working with the community to collect more feedback and finalize the design. We expect this to land soon, and it will be released out of cycle if needed.

v1.2.2

5 years ago

This release is a patch release based on v1.2.1:

Bug fixes:

  • #1040 - Update proto files
  • #1044 - Fix Operator tests (test data fix)
  • #1052 - Fix Proto3 issues
  • #1053 - Type and shape inference code fix
  • #1057 - Op schema code fix
  • #1058 - Remove empty model (test data fix)
  • #1060 - Type and shape inference code fix
  • #1063 - PReLU version fix
  • #1064 - Pytorch generated test case removal (test data fix)
  • #1069 - Remove erroneous documentation around maps and sequences (description only)
  • #1070 - Add more check for type and shape inference code
  • #1090 - Fix local region definition in LRN spec (description only)
  • #1102 - Add float16 support back for math and reduction ops
  • #1103 - Make RNN/LSTM/GRU treatment of recurrent weights consistent
  • #1104 - Remove/replace /MX with /WX for MSVC build (build fix)
  • #1105 - Add ignoring flags (build fix)
  • #1107 - Fix the LRN’s doc (description only)

v1.2.1

5 years ago

ONNX 1.2.1 release.

The following changes have been made since the 1.1.2 release:

IR Changes

  • Adds function and attribute reference (PR #802).
  • Adds dimension denotation (PR #443) and type denotation (PR #879).

Operator Changes

The operator set version for ONNX 1.2 is 7 for the ONNX domain and 1 for the ONNX_ML domain.

  • Type and shape inference function added for all operators.
  • Adds new operators:
    • Upsample (PR #861) – promoted from experimental; attributes and behavior updated to support an arbitrary number of dimensions.
    • Identity (PR #892) – promoted from experimental.
    • Acos, Asin, Atan, Cos, Sin, Tan (PR #869).
    • Multinomial (PR #897).
  • Removes FC (experimental) op (PR #977).
  • Moves to numpy broadcasting semantics (PR #907).
  • Clarifies “optional” semantics for inputs/outputs and adjusts RNN/GRU/LSTM/BatchNormalization/Dropout accordingly (PR #1006, PR #1014).
  • AveragePool – formulas for output shape updated (PR #751), extended to support average count including padding (PR #884)
  • BatchNormalization – clarify outputs can be n-dim (PR #733)
  • Cast – changed the “to” attribute from string to int (PR #727)
  • ConstantFill (exp) – change value attr from optional to default value of 0 (PR #808)
  • InstanceNormalization – clarify outputs can be n-dim (PR #733)
  • MaxPool – formulas for output shape updated (PR #751)
  • AveragePool, MaxPool, Conv – update to support dimension denotation (PR #443)
  • Reshape – add output shape as an input (PR #608)
  • Size – change output from int to scalar tensor (PR #759)
  • Tile – replace tiles and axis inputs with repeats to match numpy (PR #757)
  • ZipMap – update type constraints from map to seq (PR #818)
  • Affine – add default values for alpha and beta attributes (PR #820)
  • FeatureVectorizer – update behavior (PR #843)
  • LinearClassifier – coefficient attribute is now required (PR #836)
  • RandomNormalLike, RandomUniformLike – change input type constraints and change behavior to copy shape instead of compute it (PR #846)
  • Selu – change default value of attributes to match other frameworks (PR #839)
  • ArgMax, ArgMin – specify default values for axis attribute (PR #847)
  • DepthToSpace, SpaceToDepth – blocksize attribute is now required (PR #847)
  • GRU, LSTM, RNN – specify default value for activation_* attributes (PR #847)
  • Reduce* – specify default behavior for axes attribute (PR #847)
  • Concat, Gather, Squeeze, Unsqueeze – accept any tensor type (PR #957)
  • Add, Div, Mul, Pow, Sub – enhance 1-element broadcast case (PR #902)
  • Pad – clarify pads attribute (PR #962)
  • LRN – specify default values and clarify behavior (PR #965)
  • ConvTranspose – clarify padding behavior and remove restriction on output_padding attribute (PR #1012)
  • All ops – updated type constraints (PR #666)
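The move to numpy broadcasting semantics (PR #907) means that binary operators such as Add, Mul, Pow, and Sub align shapes from the trailing dimensions, with size-1 dimensions stretching to match. A quick illustration in plain numpy:

```python
import numpy as np

# Shapes are aligned from the trailing dimensions;
# a (3,) array broadcasts across each row of a (2, 3) array.
a = np.arange(6.0).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
b = np.array([10.0, 20.0, 30.0])   # shape (3,)
c = a + b

print(c.shape)  # (2, 3)
print(c[0])     # [10. 21. 32.]
```

ONNX ops following these semantics behave the same way, replacing the earlier explicit `broadcast`/`axis` attribute scheme.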

v1.1.2

6 years ago

This release is a patch release based on v1.1.0 (v1.1.1):

Bug fixes:

  • #775 - Align Python and C++ schema API for ONNX-ML
  • #781 - Fix some checker implementation not ideal for ONNX-ML
  • #799 - Update specs for ONNX ML

v1.1.0

6 years ago

Change log for the release:

  • Operators fixed and added - Cast, Reshape, Pool, Shape, Size, Concat, Pow, Slice, TopK, structured, reducible control flow (experimental), Unsqueeze (PR #497) (PR #436) (PR #496) (PR #529) (PR #513) (PR #525) (PR #390) (PR #532) (PR #587) (PR #569) (PR #552)

  • Test cases added and fixed - global average pool, max pool, Slice, Cast, Pow, Concat, Reshape, TopK, Softplus, Softsign, Softmax, LogSoftmax, Hardmax, Transpose, Max, Min, Mean, Sum, plus 9 math operators; Reciprocal, logic operators, Clip, Div, Mul, Pow, Sub; Elu, LeakyRelu, Selu, HardSigmoid, Gather, Conv
    (PR #468) (PR #472) (PR #487) (PR #500) (PR #507) (PR #516) (PR #529) (PR #506) (PR #509) (PR #546) (PR #548) (PR #543) (PR #574) (PR #596)

  • Build issues on various platforms:
    • Provide option to enforce /MD or /MT when building with MSVC (PR #602)
    • Fix ONNX library build for Windows
    • Add to_string for Android (PR #597)
    • Handle situations where protobuf is built on the fly (PR #592)
    • Fix CMakeLists on Windows (PR #589)
    • Travis tweaks to make sure the correct versions of Python are installed (PR #584)
    • Improve CMakefile of ONNX (PR #563)
    • Don't include pybind11 if its target is already exported (PR #550)
    • Call gen_proto.py in cmake (PR #538)
    • Couple of cmake fixes (PR #521)
    • Fix build on Mac (PR #514)
    • Set up cmake (PR #469)
    • Remove onnx-caffe2 reference (PR #558)

  • Naming and convention changes:
    • Add ONNX_NAMESPACE around rnn/old.cc (PR #605)
    • Change the model file extension from .pb to .onnx (PR #541)
    • Make the ONNX namespace configurable (PR #484)

  • Bug fixes:
    • Fix bug where get_attribute_value could not get the g field (PR #599)
    • Fix treatment of optional inputs

  • Test framework changes:
    • Add outputs_info into the run_node backend interface (PR #588)

  • IR changes:
    • Add option to use customized protoc (PR #594)
    • Preserve value infos if they are needed (PR #561)
    • Check whether perm exists before using it (PR #559)
    • Add int32, int64 and double input data types for FeatureVectorizer (PR #547)
    • Sort the attributes of NodeProto generated by make_node (PR #479)

  • Other changes:
    • Change the cached model checking logic (PR #545)
    • Fix the way we find the protobuf library (PR #539)
    • Modularize ONNX libraries (PR #528)
    • Printable graph support for nested graphs + sugar (PR #483)
    • Lexical scoping in checker (PR #485)
    • OSX Travis support (PR #566)

v1.0.1

6 years ago

This is a patch release on top of v1.0

Bug Fixes:

  • #432 - ONNX PyPi install fails when git is not installed on host.

v1.0

6 years ago

This release is the first stable version of ONNX.

This version also includes the ONNX-ML profile that extends ONNX with classic ML constructs. This is an optional profile.

The following changes have been made since the 0.2 release:

Spec Changes

  • Adds versioning documentation
  • Adds release management notes
  • Operator specs include samples

IR Changes

  • Adds operator sets, imports and experimental operator support.
  • Adds an AttributeType enum, doc_string fields, domain for NodeProto.
  • Adds named metadata properties to models.
  • Removes sparse tensor protos.
  • Checker now available in C++ with Python wrapper.

Operator Changes

  • Adds Identity, Affine, ThresholdRelu, ScaledTanh, ParametricSoftplus, ImageScaler, MeanVarianceNormalization, Crop, Embedding, HardSigmoid, Mean, Clip, LogSoftmax, Hardmax, Softsign, Softplus, MatMul, InstanceNormalization, LRN, ReduceSumSquare, ReduceLogSum, ReduceL1, ReduceL2, RNN, GRU, LSTM, SpaceToDepth, DepthToSpace, Tile.
  • Adds And, Or, Xor, Greater, Less, Equal, Not.
  • Removes Caffe2ConvTranspose, SpatialBN, LRN, ChannelShuffle, RecurrentNetwork.
  • Replaces Normalization with LpNormalization.
  • Adds type constraints.
  • Much improved tests for operators and reporting.

v0.2

6 years ago

Spec changes

  • Type and shape annotations for the model (required for inputs/outputs, optional for internal values)

Breaking changes

onnx.proto underwent breaking changes that make earlier serialized protobufs invalid. We commit to keeping all changes to the protobuf structure backward-compatible after this (v0.2) release.

Specific changes:

  • Introduction of ModelProto to represent top-level model in addition to GraphProto
  • Related API changes renaming graph to model
  • Addition of type and optional shape annotations for inputs and outputs of the graph

Operator spec changes

  • Added Gemm
  • Added Pad
  • Added Constant (graduated from experimental to non-experimental)
  • In Conv and ConvTranspose, renamed attribute “filter” to “weights”
  • In Elu, added “alpha” attribute
  • Concat – number of outputs fixed from 2 to 1
  • Dropout – number of outputs changed from 2 to (1 or 2)
  • Added OptimizedRNN operator representing an entire RNN stack, similar to CuDNN
  • Added ATen support as an experimental operator that allows directly representing any of PyTorch's tensor functions (which leverage ATen)

New Tutorials