Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
Deprecations:

- `coremltools.compression_utils`
- `coremltools.ImageType` used with inputs
- The `useCPUOnly` parameter from `coremltools.convert` and `coremltools.models.MLModel`. Use `coremltools.ComputeUnit` instead.

Known issues:

- The `predict` API in coremltools can crash when either the input or output is of type grayscale float16.
- `MLComputeUnits.CPUAndNeuralEngine` is not available in coremltools yet.

New PyTorch ops: `bitwise_not`, `dim`, `dot`, `eye`, `fill`, `hardswish`, `linspace`, `mv`, `new_full`, `new_zeros`, `rrelu`, `selu`.

New TensorFlow ops: `DivNoNan`, `Log1p`, `SparseSoftmaxCrossEntropyWithLogits`.

Bug fixes for ops: `broadcast_tensors`, `frobenius_norm`, `full`, `norm`, and `scatter_add`.
Use the `convert_to` argument with the unified converter API to indicate the model type of the Core ML model:

- `coremltools.convert(..., convert_to="mlprogram")` converts to a Core ML model of type ML program.
- `coremltools.convert(..., convert_to="neuralnetwork")` converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Using just `coremltools.convert(...)` will default to producing a neural network Core ML model.

To control the compute precision of an ML program, use `ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32)` or `ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)`.

ML programs are saved with the same `save` method. Simply use `model.save("<model_name>.mlpackage")` instead of the usual `model.save("<model_name>.mlmodel")`.
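The convert-and-save flow described above can be sketched as follows. This is a hedged example, not the library's canonical snippet: it assumes coremltools >= 5.0 and torch are installed, and is guarded so it runs safely where they are not. The tiny linear model and the `tiny_linear.mlpackage` filename are illustrative.

```python
import importlib.util

# Hedged sketch of the convert_to / save flow. Assumes coremltools >= 5.0 and
# torch are installed; guarded so the snippet is safe to run anywhere.
if importlib.util.find_spec("coremltools") and importlib.util.find_spec("torch"):
    import coremltools as ct
    import torch

    # Any traced TorchScript model works; a one-layer net keeps the sketch small.
    net = torch.nn.Linear(4, 2).eval()
    example = torch.rand(1, 4)
    traced = torch.jit.trace(net, example)

    # convert_to="mlprogram" produces the newer ML program model type;
    # omitting it defaults to the neural network type.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=example.shape)],
        convert_to="mlprogram",
        compute_precision=ct.precision.FLOAT32,
    )

    # ML programs are saved as .mlpackage rather than .mlmodel.
    mlmodel.save("tiny_linear.mlpackage")
```

Swapping `compute_precision=ct.precision.FLOAT16` into the same call selects the float16 variant described above.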
Added the `compute_units` parameter to `MLModel` and `coremltools.convert`. This matches the `MLComputeUnits` in Swift and Objective-C. Use this parameter to specify where your models can run:

- `ALL` - use all compute units available, including the neural engine.
- `CPU_ONLY` - limit the model to only use the CPU.
- `CPU_AND_GPU` - use both the CPU and GPU, but not the neural engine.

A model can now be converted without loading it, via `ct.convert(..., skip_model_load=True)`.
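The `compute_units` options above can be sketched as follows. This is a hedged example assuming coremltools >= 5.0 is installed (it is guarded so it runs safely otherwise); the trivial ReLU spec exists only so the snippet is self-contained.

```python
import importlib.util

# Hedged sketch of the compute_units parameter. Assumes coremltools >= 5.0;
# guarded so the snippet is safe to run even where it is not installed.
if importlib.util.find_spec("coremltools"):
    import coremltools as ct
    from coremltools.models import datatypes
    from coremltools.models.neural_network import NeuralNetworkBuilder

    # Build a trivial one-op spec purely for illustration.
    builder = NeuralNetworkBuilder(
        [("x", datatypes.Array(3))], [("y", datatypes.Array(3))]
    )
    builder.add_activation("relu", "RELU", "x", "y")

    # compute_units mirrors MLComputeUnits in Swift and Objective-C.
    # skip_model_load=True avoids compiling the model, so this also runs off-macOS.
    model = ct.models.MLModel(
        builder.spec,
        compute_units=ct.ComputeUnit.CPU_ONLY,
        skip_model_load=True,
    )
```

`ct.ComputeUnit.ALL` and `ct.ComputeUnit.CPU_AND_GPU` are passed the same way.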
The methods `convert_neural_network_weights_to_fp16()` and `convert_neural_network_spec_weights_to_fp16()`, which had been deprecated in coremltools 4, have been removed.

The `useCPUOnly` parameter for `MLModel` and `MLModel.predict` has been deprecated. Instead, use the `compute_units` parameter for `MLModel` and `coremltools.convert`.

Added the `torch_tensor_assign` op and the `index_put_` op. Fixed bugs in translation of `expand` ops and `sort` ops.

Added the `compute_units` parameter to `MLModel` and `coremltools.convert`. Use this to specify where your models can run:

- `ALL` - use all compute units available, including the neural engine.
- `CPU_ONLY` - limit the model to only use the CPU.
- `CPU_AND_GPU` - use both the CPU and GPU, but not the neural engine.

Deprecated the `useCPUOnly` parameter for `MLModel` and `coremltools.convert`.

Added the `compute_precision` parameter of `coremltools.convert`.

Added the `coremltools.utils.rename_feature` utility for the ML Program spec.

To install this version run: `pip install coremltools==5.0b1`
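The `coremltools.utils.rename_feature` utility mentioned above can be sketched like this. A hedged example, assuming coremltools is installed (guarded otherwise); the small neural network spec is only a stand-in so the rename has something to act on.

```python
import importlib.util

# Hedged sketch of coremltools.utils.rename_feature. Assumes coremltools is
# installed; guarded so the snippet runs safely anywhere.
if importlib.util.find_spec("coremltools"):
    import coremltools as ct
    from coremltools.models import datatypes
    from coremltools.models.neural_network import NeuralNetworkBuilder

    # A small spec to rename; per the notes above, the utility also
    # supports ML Program specs.
    builder = NeuralNetworkBuilder(
        [("input", datatypes.Array(2))], [("output", datatypes.Array(2))]
    )
    builder.add_activation("relu", "RELU", "input", "output")
    spec = builder.spec

    # Rename the model's input feature in place.
    ct.utils.rename_feature(spec, "input", "new_input")
```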
The default compute precision is `precision.FLOAT32`, although it will be updated to `precision.FLOAT16` in a later beta release, prior to the official coremltools 5.0 release.

To restrict execution to the CPU, provide the `useCPUOnly` argument during conversion. That is, `ct.convert(source_model, convert_to='mlprogram', useCPUOnly=True)`. For such models, in your Swift code you can use the `MLComputeUnits.cpuOnly` option at the time of loading the model, to restrict the compute unit to the CPU.