Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware.
This project provides researchers, developers, and engineers tools for optimizing and deploying state-of-the-art neural networks on efficient hardware.
Specifically, this project aims to apply quantization to compress neural networks.
MCT is developed by researchers and engineers working at Sony Semiconductor Israel.
This section provides installation instructions and a quick start guide.
To install the latest stable release of MCT, run the following command:

```bash
pip install model-compression-toolkit
```
For installing the nightly version or installing from source, refer to the installation guide.
Explore the Model Compression Toolkit (MCT) through our tutorials, covering compression techniques for Keras and PyTorch models. Access interactive notebooks for hands-on learning.
Additionally, for quick quantization of a variety of models from well-known collections, visit the quick-start page and the results CSV.
Currently, MCT is tested against various combinations of Python, PyTorch, and TensorFlow versions:
|             | PyTorch 1.13 | PyTorch 2.0 | PyTorch 2.1 |
|-------------|--------------|-------------|-------------|
| Python 3.9  | ✓ | ✓ | ✓ |
| Python 3.10 | ✓ | ✓ | ✓ |
| Python 3.11 | ✓ | ✓ | ✓ |
|             | TensorFlow 2.12 | TensorFlow 2.13 | TensorFlow 2.14 | TensorFlow 2.15 |
|-------------|-----------------|-----------------|-----------------|-----------------|
| Python 3.9  | ✓ | ✓ | ✓ | ✓ |
| Python 3.10 | ✓ | ✓ | ✓ | ✓ |
| Python 3.11 | ✓ | ✓ | ✓ | ✓ |
MCT offers a range of powerful features to optimize neural network models for efficient deployment. These supported features include:
MCT provides tools for generating synthetic images based on the statistics stored in a model's batch normalization layers. These generated images are valuable for various compression tasks where image data is required, such as quantization and pruning. You can customize data generation configurations to suit your specific needs. Go to the Data Generation page.
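As an illustration of the underlying idea (not MCT's actual data-generation code; the function name and the i.i.d.-sampling simplification below are hypothetical), a BatchNorm layer's stored running mean and standard deviation can be used to draw synthetic values with matching statistics:

```python
import random

def generate_synthetic_channel(bn_mean, bn_std, num_pixels, seed=0):
    # Hypothetical simplification: draw i.i.d. Gaussian samples with the
    # BatchNorm layer's stored statistics. Real data-generation methods
    # instead optimize input images so the *batch* statistics they induce
    # inside the network match the stored BN statistics.
    rng = random.Random(seed)
    return [rng.gauss(bn_mean, bn_std) for _ in range(num_pixels)]

# Stored BN statistics for one channel (toy values).
pixels = generate_synthetic_channel(bn_mean=0.5, bn_std=0.1, num_pixels=10_000)
mean = sum(pixels) / len(pixels)
```

The sample mean and spread land close to the stored BN statistics, which is exactly the property real data-free methods optimize generated images for.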
MCT supports different quantization methods:
| Quantization Method | Complexity | Computational Cost |
|---------------------|------------|--------------------|
| PTQ | Low | Low (order of minutes) |
| GPTQ (parameter fine-tuning using gradients) | Mild | Mild (order of 2-3 hours) |
| QAT | High | High (order of 12-36 hours) |
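To make the PTQ row concrete, here is a minimal sketch of what post-training quantization does at its core (function names are illustrative, not MCT's API): a small calibration set fixes a scale, and values are then rounded onto an 8-bit grid with no training involved:

```python
def compute_scale(calibration_values, num_bits=8):
    # Symmetric quantization: map the largest absolute calibration value
    # onto the largest representable signed integer (127 for 8 bits).
    max_abs = max(abs(v) for v in calibration_values)
    qmax = 2 ** (num_bits - 1) - 1
    return max_abs / qmax

def fake_quantize(value, scale, num_bits=8):
    # Round to the integer grid, clamp to the signed range, and map back
    # to floats ("fake quantization"), as done when simulating quantized
    # inference.
    qmax = 2 ** (num_bits - 1) - 1
    q = max(-qmax - 1, min(qmax, round(value / scale)))
    return q * scale

scale = compute_scale([-1.0, 0.5, 0.9, -0.3])  # toy calibration batch
```

For in-range values the error is bounded by roughly half a quantization step; out-of-range values are clipped, which is why the calibration data must be representative.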
In addition, MCT supports different quantization schemes for quantizing weights and activations, such as power-of-two, symmetric, and uniform quantization.
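For example, a power-of-two scheme (a common hardware-friendly choice; the helper below is an illustration, not MCT code) rounds the clipping threshold up to the nearest power of two so that rescaling can be implemented as a bit shift:

```python
import math

def power_of_two_threshold(max_abs):
    # Round the clipping threshold up to the nearest power of two so the
    # quantization scale can be implemented as a bit shift in hardware.
    return 2.0 ** math.ceil(math.log2(max_abs))
```

A tensor whose values reach 3.0 gets a threshold of 4.0; the slightly wasted range is the price paid for shift-only rescaling.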
Main features:
As part of GPTQ, we provide an advanced optimization algorithm called EPTQ.
The algorithm is detailed in the paper: "EPTQ: Enhanced Post-Training Quantization via Label-Free Hessian" [4].
More details on how to use EPTQ via MCT can be found in the EPTQ guidelines.
MCT introduces structured, hardware-aware model pruning. This pruning technique compresses models for specific hardware architectures by taking the target platform's Single Instruction, Multiple Data (SIMD) capabilities into account. By pruning entire groups of channels (SIMD groups), the approach not only reduces model size and complexity, but also keeps the remaining channels aligned with the SIMD architecture while meeting a target resource utilization for the weights memory footprint. Keras API | PyTorch API
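The grouping idea can be sketched as follows (a toy illustration, not MCT's implementation: here precomputed per-channel scores stand in for whatever importance metric the pruner actually uses):

```python
def prune_simd_groups(channel_scores, simd_width, keep_groups):
    # Partition output channels into contiguous SIMD-sized groups, score
    # each group by the total importance of its channels, and keep only
    # the highest-scoring groups. Pruning whole groups keeps the remaining
    # channel count a multiple of the SIMD width.
    groups = [channel_scores[i:i + simd_width]
              for i in range(0, len(channel_scores), simd_width)]
    ranked = sorted(((sum(g), idx) for idx, g in enumerate(groups)),
                    reverse=True)
    return sorted(idx for _, idx in ranked[:keep_groups])

# Eight channels, SIMD width 4: keep the single most important group.
kept = prune_simd_groups([1, 1, 1, 1, 5, 5, 5, 5], simd_width=4, keep_groups=1)
```

Because whole SIMD groups are removed rather than individual channels, the surviving layer width stays divisible by the SIMD width, so the hardware's vector lanes remain fully utilized.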
Some features are experimental and subject to change. For more details, we recommend visiting our project website, where experimental features are labeled as such.
Graph of MobileNetV2 accuracy on ImageNet vs average bit-width of weights, using single-precision quantization, mixed-precision quantization, and mixed-precision quantization with GPTQ.
For more results, please see [1].
We quantized classification networks from the torchvision library. In the following table we present the ImageNet validation results for these models:
| Network Name | Float Accuracy | 8-Bit Accuracy | Data-Free 8-Bit Accuracy |
|--------------|----------------|----------------|--------------------------|
| MobileNet V2 [3] | 71.886 | 71.444 | 71.29 |
| ResNet-18 [3] | 69.86 | 69.63 | 69.53 |
| SqueezeNet 1.1 [3] | 58.128 | 57.678 | |
For more results, please refer to quick start.
Results for applying pruning to reduce the parameters of the following models by 50%:
| Model | Dense Model Accuracy | Pruned Model Accuracy |
|-------|----------------------|-----------------------|
| ResNet50 [2] | 75.1 | 72.4 |
| DenseNet121 [3] | 74.44 | 71.71 |
If the accuracy degradation of the quantized model is too large for your application, check out the Quantization Troubleshooting for common pitfalls and some tools to improve quantization accuracy.
Check out the FAQ for common issues.
MCT aims to keep this fork up to date and welcomes contributions from anyone.
You will find more information about contributions in the Contribution guide.
[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint.
[4] Gordon, O., Habi, H.V. and Netzer, A., 2023. EPTQ: Enhanced Post-Training Quantization via Label-Free Hessian. arXiv preprint.