Core ML Tools (coremltools) Release Notes

Core ML Tools (coremltools) contains supporting tools for Core ML model conversion, editing, and validation.

6.0b1

1 year ago
  • ML Program weight compression: affine quantization, palettization, and sparsification. See coremltools.compression_utils.
  • New options to set input and output types: multi-arrays of type float16, grayscale images of type float16, and image outputs (similar to the coremltools.ImageType already used for inputs).
  • Support for PyTorch 1.11.0.
  • Support for TensorFlow 2.8.
  • [API Breaking Change] Remove useCPUOnly parameter from coremltools.convert and coremltools.models.MLModel. Use coremltools.ComputeUnit instead.
  • Support for many new PyTorch and TensorFlow layers
  • Many bug fixes and enhancements.
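The affine quantization mode listed above can be illustrated with a toy, pure-Python sketch of 8-bit scale/zero-point quantization. This is an illustration of the general technique only, not coremltools' actual implementation (see coremltools.compression_utils for the real API):

```python
def affine_quantize(weights, n_bits=8):
    """Toy affine (scale/zero-point) quantization of a list of floats.

    Maps the float range [w_min, w_max] onto the unsigned integer
    range [0, 2**n_bits - 1]. Illustration only, not coremltools' code.
    """
    w_min, w_max = min(weights), max(weights)
    q_max = 2 ** n_bits - 1
    scale = (w_max - w_min) / q_max or 1.0  # avoid divide-by-zero for constant weights
    zero_point = round(-w_min / scale)
    quantized = [min(q_max, max(0, round(w / scale) + zero_point)) for w in weights]
    return quantized, scale, zero_point


def affine_dequantize(quantized, scale, zero_point):
    """Reconstruct approximate floats from the quantized values."""
    return [(q - zero_point) * scale for q in quantized]


weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = affine_quantize(weights)
restored = affine_dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Palettization and sparsification follow the same spirit (replacing weights with a small lookup table, or zeroing small weights), with the real entry points living in coremltools.compression_utils.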

Known issues

  • While conversion of Core ML models with grayscale float16 images should work with iOS 16 / macOS 13 beta, the coremltools-Core ML Python binding has an issue that causes the predict API in coremltools to crash when either the input or output is of type grayscale float16.
  • The new compute unit configuration MLComputeUnitsCPUAndNeuralEngine is not yet available in coremltools.

5.2

2 years ago
  • Support for the latest version (1.10.2) of PyTorch
  • Support for TensorFlow 2.6.2
  • Support for new PyTorch ops:
    • bitwise_not
    • dim
    • dot
    • eye
    • fill
    • hardswish
    • linspace
    • mv
    • new_full
    • new_zeros
    • rrelu
    • selu
  • Support for new TensorFlow ops:
    • DivNoNan
    • Log1p
    • SparseSoftmaxCrossEntropyWithLogits
  • Various bug fixes, clean ups and optimizations.
  • This is the final coremltools version to support Python 3.5

5.1

2 years ago
  • New supported PyTorch operations: broadcast_tensors, frobenius_norm, full, norm and scatter_add.
  • Automatic support for in-place PyTorch operations when the corresponding non-in-place operation is supported.
  • Support PyTorch 1.9.1
  • Various other bug fixes, optimizations and improvements.
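PyTorch marks in-place op variants with a trailing underscore (e.g. relu_ vs relu), so the automatic fallback described above can be sketched as a simple lookup. The helper and op set below are hypothetical, for illustration only, and do not reflect the converter's real internals:

```python
# Hypothetical subset of ops with a registered non-in-place translation.
SUPPORTED_OPS = {"add", "relu", "mul"}


def resolve_op(op_name):
    """Map an in-place PyTorch op (trailing underscore) to its
    non-in-place counterpart when only the latter has a translation."""
    if op_name in SUPPORTED_OPS:
        return op_name
    if op_name.endswith("_") and op_name[:-1] in SUPPORTED_OPS:
        return op_name[:-1]  # e.g. "relu_" falls back to "relu"
    raise KeyError(f"no translation registered for {op_name!r}")


assert resolve_op("relu_") == "relu"
assert resolve_op("add") == "add"
```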

5.0

2 years ago

What’s New

  • Added a new kind of Core ML model type, called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they differ from the classic Core ML neural network types, and what they offer, please see the documentation here
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML Program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Using just coremltools.convert(...) defaults to producing a neural network Core ML model.
    • When targeting ML Program, an additional option is available to set the compute precision of the Core ML model to either float32 or float16. The default is float16. Usage example:
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To know more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new Model Package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel")
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Adds the compute_units parameter to MLModel and coremltools.convert. This matches the MLComputeUnits in Swift and Objective-C. Use this parameter to specify where your models can run:
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • Python 3.9 Support
  • Native M1 support for Python 3.8 and 3.9
  • Support for TensorFlow 2.5
  • Support Torch 1.9.0
  • New Torch ops: affine_grid_generator, einsum, expand, grid_sampler, GRU, linear, index_put, maximum, minimum, SiLU, sort, torch_tensor_assign, zeros_like.
  • Added a flag to skip loading a model during conversion. Useful when converting for a new macOS version on an older macOS: ct.convert(..., skip_model_load=True)
  • Various bug fixes, optimizations and additional testing.
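The compute_precision option above trades accuracy for size and speed: IEEE 754 half precision has only a 10-bit mantissa, so values get rounded. The effect of float16 rounding in general (not Core ML's runtime behavior specifically) can be demonstrated with the standard library alone:

```python
import struct

def to_float16(x):
    """Round-trip a Python float through IEEE 754 half precision.

    struct's "e" format code packs/unpacks a 16-bit half-precision float.
    """
    return struct.unpack("<e", struct.pack("<e", x))[0]


# 0.1 is not representable in half precision; it rounds to the
# nearest float16 value, introducing a small relative error.
assert to_float16(0.1) == 0.0999755859375
# Powers of two survive the round trip exactly.
assert to_float16(0.25) == 0.25
```

For most deep learning models this rounding is harmless, which is why float16 is an acceptable default; compute_precision=ct.precision.FLOAT32 remains available for precision-sensitive models.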

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • Keras.io and ONNX converters will be deprecated in coremltools 6. Users are recommended to transition to the TensorFlow/PyTorch conversion via the unified converter API.
  • Methods such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), which had been deprecated in coremltools 4, have been removed.
  • The useCPUOnly parameter for MLModel and MLModel.predict has been deprecated. Instead, use the compute_units parameter for MLModel and coremltools.convert.

5.0b5

2 years ago
  • Added support for PyTorch conversion of tensor assignment statements: torch_tensor_assign op and index_put_ op. Fixed bugs in the translation of expand and sort ops.
  • Model input/output name sanitization: input and output names for the "neuralnetwork" backend are now sanitized (updated to match the regex [a-zA-Z_][a-zA-Z0-9_]*), matching the "mlprogram" backend. So instead of names such as "1" or "input/1", the unified converter API will produce names such as "var_1" or "input_1".
  • Fixed a bug preventing a Model Package from being saved more than once to the same path.
  • Various bug fixes, optimizations and additional testing.
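The sanitization rule can be reproduced with the regex from the note above. This is a simplified sketch of the technique, not coremltools' actual sanitizer, which may differ in detail:

```python
import re

VALID_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")


def sanitize_name(name):
    """Make a tensor name match [a-zA-Z_][a-zA-Z0-9_]*.

    Invalid characters become underscores, and a purely-numeric-leading
    name gets a "var_" prefix (mirroring the "1" -> "var_1" example
    above). Simplified sketch, not coremltools' code.
    """
    name = re.sub(r"[^a-zA-Z0-9_]", "_", name)
    if name and name[0].isdigit():
        name = "var_" + name
    return name


assert sanitize_name("input/1") == "input_1"
assert sanitize_name("1") == "var_1"
assert VALID_NAME.match(sanitize_name("input/1"))
```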

5.0b4

2 years ago
  • Fixes Python 3.5 and 3.6 errors when importing some specific submodules.
  • Fixes Python 3.9 import error for arm64. #1288

5.0b3

2 years ago
  • Native M1 support for Python 3.8 and Python 3.9
  • Adds the compute_units parameter to MLModel and coremltools.convert. Use this to specify where your models can run:
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • With the above change we are deprecating the useCPUOnly parameter for MLModel and coremltools.convert.
  • For ML Programs the default compute precision has changed from float32 to float16. This can be overridden with the compute_precision parameter of coremltools.convert.
  • Support for TensorFlow 2.5
  • Removed scipy dependency
  • Various bug fixes and optimizations

5.0b2

2 years ago
  • Python 3.9 support
  • Ubuntu 18 support
  • Torch 1.9.0 support
  • Added flag to skip loading a model during conversion. Useful when converting for new macOS on older macOS.
  • New torch ops: affine_grid_generator, grid_sampler, linear, maximum, minimum, SiLU
  • SiLU activation fusion optimization
  • Added no-op transpose to the noop_elimination pass
  • Various bug fixes and other improvements, including:
    • bug fix in coremltools.utils.rename_feature utility for ML Program spec
    • bug fix in classifier model conversion for ML Program target
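A no-op transpose is one whose permutation is the identity, so removing it never changes the computation. The elimination idea can be sketched over a toy op list; the dict-based IR below is hypothetical and not coremltools' actual MIL pass infrastructure:

```python
def is_noop_transpose(op):
    """A transpose whose permutation is [0, 1, ..., n-1] leaves the tensor unchanged."""
    perm = op.get("perm", [])
    return op["type"] == "transpose" and perm == list(range(len(perm)))


def noop_elimination(ops):
    """Drop ops that provably do nothing (here: identity transposes)."""
    return [op for op in ops if not is_noop_transpose(op)]


ops = [
    {"type": "transpose", "perm": [0, 1, 2]},  # identity permutation: removed
    {"type": "transpose", "perm": [0, 2, 1]},  # real transpose: kept
    {"type": "relu"},                          # unrelated op: kept
]
assert [op["type"] for op in noop_elimination(ops)] == ["transpose", "relu"]
```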

5.0b1

2 years ago

To install this version run: pip install coremltools==5.0b1

What's New

  • Added a new kind of Core ML model type, called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they differ from the classic Core ML neural network types, and what they offer, please see the documentation here
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML Program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Using just coremltools.convert(...) defaults to producing a neural network Core ML model.
    • When targeting ML Program, an additional option is available to set the compute precision of the Core ML model to either float32 or float16. That is,
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To know more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new Model Package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel")
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Several performance improvements from new graph passes in the conversion pipeline for deep learning models, including "fuse_gelu", "replace_stack_reshape", "concat_to_pixel_shuffle", "fuse_layernorm_or_instancenorm", etc.
  • New translation methods for Torch ops such as "einsum", "GRU", "zeros_like", etc.
  • OS versions supported by coremltools 5.0b1: macOS 10.15 and above; Linux with C++17 and above

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • Keras.io and ONNX converters will be deprecated in coremltools 6. Users are recommended to transition to the TensorFlow/PyTorch conversion via the unified converter API.
  • Methods such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), which had been deprecated in coremltools 4, have been removed.

Known Issues

  • The default compute precision for conversion to ML Programs is set to precision.FLOAT32, although it will be updated to precision.FLOAT16 in a later beta release, prior to the official coremltools 5.0 release.
  • Core ML may downcast float32 tensors specified in ML Program model types when running on a device with Neural Engine support. Workaround: Restrict compute units to .cpuAndGPU in MLModelConfiguration for seed 1
  • Converting some models to ML Program may lead to an error (such as a segmentation fault or "Error in building plan") due to a bug in the Core ML GPU runtime. Workaround: when using coremltools, you can force prediction to stay on the CPU, without changing the prediction code, by specifying the useCPUOnly argument during conversion, i.e. ct.convert(source_model, convert_to='mlprogram', useCPUOnly=True). For such models, in your Swift code you can use the MLComputeUnits.cpuOnly option when loading the model to restrict the compute unit to the CPU.
  • Flexible input shapes for image inputs have a bug when used with the ML Program type in seed 1 of the Core ML framework. This will be fixed in an upcoming seed release.
  • coremltools 5.0b1 supports Python versions 3.5, 3.6, 3.7, and 3.8. Support for Python 3.9 will be enabled in a future beta release.

4.1

3 years ago
  • Support for Python 2 is deprecated. This release contains wheels for Python 3.5, 3.6, 3.7, and 3.8
  • PyTorch converter updates:
    • added translation methods for ops topK, groupNorm, log10, pad, stacked LSTMs
    • support for PyTorch 1.7
  • TensorFlow Converter updates:
    • Added translation functions for ops Mfcc, AudioSpectrogram
  • Miscellaneous Bug fixes