MMRazor Release Notes

OpenMMLab Model Compression Toolbox and Benchmark.

v1.0.0

v1.0.0 (24/04/2023)

We are excited to announce the first official release of MMRazor 1.0.

Highlights

  • MMRazor quantization is released. It has been validated on task models and model deployment, so pre-trained OpenMMLab models can be quickly quantized and deployed to a specified backend. A hedged sketch follows below.
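
As a rough illustration of the new workflow, the sketch below wraps a pre-trained classifier in a quantization algorithm with a backend-specific quantizer. The class and field names are best-effort assumptions about this release's quantization API, not a verbatim recipe.

# Hedged sketch: config-driven quantization of a pre-trained OpenMMLab model.
# Class and field names below are assumptions, not the exact MMRazor schema.
model = dict(
    type='mmrazor.MMArchitectureQuant',   # assumed algorithm wrapping the task model
    architecture=dict(                    # the pre-trained model to quantize
        cfg_path='mmcls::resnet/resnet18_8xb32_in1k.py',
        pretrained=True),
    quantizer=dict(type='mmrazor.OpenVINOQuantizer'))  # assumed backend-specific quantizer
# The quantized model can then be deployed to the target backend via MMDeploy.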

New Features & Improvements

NAS

Pruning

KD

Quantization

Bug Fixes

Contributors

A total of 20 developers contributed to this release. Thanks @415905716 @gaoyang07 @humu789 @LKJacky @HIT-cwh @aptsunny @cape-zck @vansin @twmht @wm901115nwpu @Hiwyl @NickYangMin @spynccat @sunnyxiaohu @kitecats @TinyTigerPan @yivona08 @xinxinxinxu @Weiyun1025 @Lxtccc

New Contributors

Full Changelog: https://github.com/open-mmlab/mmrazor/compare/v0.3.1...v1.0.0

v1.0.0rc2

Changelog of v1.0.0rc2

v1.0.0rc2 (06/01/2023)

We are excited to announce the release of MMRazor 1.0.0rc2.

New Features

NAS

Pruning

Now, ChannelAnalyzer supports most CNN models in torchvision, mmcls, mmseg, and mmdet. We will continue to support more models.

from mmengine.hub import get_model
from mmrazor.models.task_modules import ChannelAnalyzer
import json

# Build a detector from the MMDetection model zoo and analyze its channel units.
model = get_model('mmdet::retinanet/retinanet_r18_fpn_1x_coco.py')
unit_configs: dict = ChannelAnalyzer().analyze(model)
unit_config0 = list(unit_configs.values())[0]
print(json.dumps(unit_config0, indent=4))
# # short version of the config
# {
#     "channels": {
#         "input_related": [
#             {"name": "backbone.layer2.0.bn1"},
#             {"name": "backbone.layer2.0.conv2"}
#         ],
#         "output_related": [
#             {"name": "backbone.layer2.0.conv1"},
#             {"name": "backbone.layer2.0.bn1"}
#         ]
#     },
# }
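
Each unit config groups the input-related and output-related channels that must be pruned together. In the new pruning framework, such unit configs can be fed to a ChannelMutator (for example via its channel_unit_cfg option; treat the exact field name as an assumption) so that pruning respects the discovered dependencies.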

KD

Bug Fixes

  • Fix FpnTeacherDistill teacher forward from backbone + neck + head to backbone + neck (#387)
  • Fix some expired configs and checkpoints (#373, #372, #422)

Ongoing Changes

We will release quantization in the next version (1.0.0rc3)!

Contributors

A total of 11 developers contributed to this release: @wutongshenqiu @sunnyxiaohu @aptsunny @humu789 @TinyTigerPan @FreakieHuang @LKJacky @wilxy @gaoyang07 @spynccat @yivona08.

New Contributors

v1.0.0rc1

Changelog of v1.0.0rc1

v1.0.0rc1 (27/10/2022)

We are excited to announce the release of MMRazor 1.0.0rc1.

Highlights

  • New Pruning Framework: We have systematically refactored the pruning module. The new pruning module resolves the dependencies between channels more automatically and covers more corner cases.

New Features

Pruning

  • A new pruning framework is introduced in this release (#311, #313). It consists of five core modules: Algorithm, ChannelMutator, MutableChannelUnit, MutableChannel, and DynamicOp.

  • MutableChannelUnit is introduced for the first time. Each MutableChannelUnit manages all channels that share a channel dependency, as the example below shows.

    from mmrazor.models.mutators import ChannelMutator
    from mmrazor.registry import MODELS

    # Build a MobileNetV2 classifier from a config dict.
    ARCHITECTURE_CFG = dict(
        _scope_='mmcls',
        type='ImageClassifier',
        backbone=dict(type='MobileNetV2', widen_factor=1.5),
        neck=dict(type='GlobalAveragePooling'),
        head=dict(type='mmcls.LinearClsHead', num_classes=1000, in_channels=1920))
    model = MODELS.build(ARCHITECTURE_CFG)

    # Parse the model and group channels with dependencies into units.
    channel_mutator = ChannelMutator()
    channel_mutator.prepare_from_supernet(model)
    units = channel_mutator.mutable_units
    print(units[0])
    # SequentialMutableChannelUnit(
    #   name=backbone.conv1.conv_(0, 48)_48
    #   (output_related): ModuleList(
    #     (0): Channel(backbone.conv1.conv, index=(0, 48), is_output_channel=true, expand_ratio=1)
    #     (1): Channel(backbone.conv1.bn, index=(0, 48), is_output_channel=true, expand_ratio=1)
    #     (2): Channel(backbone.layer1.0.conv.0.conv, index=(0, 48), is_output_channel=true, expand_ratio=1)
    #     (3): Channel(backbone.layer1.0.conv.0.bn, index=(0, 48), is_output_channel=true, expand_ratio=1)
    #   )
    #   (input_related): ModuleList(
    #     (0): Channel(backbone.conv1.bn, index=(0, 48), is_output_channel=false, expand_ratio=1)
    #     (1): Channel(backbone.layer1.0.conv.0.conv, index=(0, 48), is_output_channel=false, expand_ratio=1)
    #     (2): Channel(backbone.layer1.0.conv.0.bn, index=(0, 48), is_output_channel=false, expand_ratio=1)
    #     (3): Channel(backbone.layer1.0.conv.1.conv, index=(0, 48), is_output_channel=false, expand_ratio=1)
    #   )
    #   (mutable_channel): SquentialMutableChannel(num_channels=48, activated_channels=48)
    # )
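
Once the units are parsed, pruning amounts to changing each unit's choice. Continuing the example above, the sketch below uses the mutator's sampling helpers; sample_choices() and set_choices() follow this release's mutator API, but treat the exact signatures as assumptions.

    # Hedged sketch: apply a sampled pruning structure to the parsed model.
    choices = channel_mutator.sample_choices()  # one pruning choice per unit
    channel_mutator.set_choices(choices)        # dynamic ops now run with pruned channels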
    

Our new pruning framework can help you develop pruning algorithms more fluently. Please refer to our PruningUserGuide documents for more details.

Distillation

  • Support CRD, a distillation algorithm based on contrastive representation learning. (#281)

  • Support PKD, a distillation algorithm that can be used in MMDetection and MMDetection3D. (#304)

  • Support DeiT, a classic Transformer distillation algorithm. (#332)

  • Add a more powerful baseline setting for KD. (#305)

  • Add MethodInputsRecorder and FuncInputsRecorder to record the inputs of a class method or a function. (#320) A hedged sketch follows below.
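
As a rough illustration, the new input recorders can be referenced from a distiller config in the same way as the existing output recorders. The type strings below mirror the 'MethodOutputs' / 'FunctionOutputs' naming pattern, and the source paths are made up for illustration; treat both as assumptions.

# Hedged sketch: recording the inputs of a class method and of a free function.
# 'MethodInputs' / 'FunctionInputs' and both source paths are assumptions.
student_recorders = dict(
    neck_in=dict(type='MethodInputs', source='neck.forward'),
    nms_in=dict(type='FunctionInputs', source='mmcv.ops.batched_nms'))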

NAS

  • Support DSNAS, a NAS algorithm that does not require retraining. (#226)

Tools

  • Support configurable intermediate feature map visualization. (#293) A useful tool is provided in this release to visualize the intermediate features of a neural network. Please refer to our VisualizationUserGuide documents for more details.

Bug Fixes

  • Fix the bug that FunctionXXRecorder and FunctionXXDelivery could not be pickled. (#320)

Ongoing changes

  • Quantization: We are developing the basic interfaces of PTQ and QAT. An RFC (Request for Comments) will be released soon.
  • AutoSlim: AutoSlim is not yet available and is being refactored.
  • Fx Pruning Tracer: Currently, the model topology can only be resolved through the backward tracer. In the future, both the backward tracer and the fx tracer will be supported.
  • More Algorithms: BigNAS, AutoFormer, GreedyNAS, and ResRep will be released in the next few versions.
  • Documentation: We will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMRazor 1.x.

Contributors

A total of 12 developers contributed to this release. Thanks @FreakieHuang @gaoyang07 @HIT-cwh @humu789 @LKJacky @pppppM @pprp @spynccat @sunnyxiaohu @wilxy @kitecats @SheffieldCao

New Contributors

  • @kitecats made their first contribution in #334
  • @SheffieldCao made their first contribution in #299

v1.0.0rc0

Changelog of v1.x

v1.0.0rc0 (31/08/2022)

We are excited to announce the release of MMRazor 1.0.0rc0. MMRazor 1.0.0rc0 is the first version of MMRazor 1.x, a part of the OpenMMLab 2.0 projects. Built upon the new training engine, MMRazor 1.x simplifies the interaction with other OpenMMLab repos and upgrades the basic APIs of KD / Pruning / NAS. It also provides a series of knowledge distillation algorithms.

Highlights

  • New engines. MMRazor 1.x is based on MMEngine, which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.

  • Unified interfaces. As a part of the OpenMMLab 2.0 projects, MMRazor 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in these interfaces and logic to allow the emergence of multi-task/modality algorithms.

  • More configurable KD. MMRazor 1.x adds Recorder to fetch the data needed for KD more automatically, Delivery to automatically pass the teacher's intermediate results to the student, and Connector to handle feature-dimension mismatches between teacher and student. A hedged config sketch follows this list.

  • More kinds of KD algorithms. Benefiting from the powerful APIs of KD, we have added several categories of KD algorithms: data-free distillation, self-distillation, and zero-shot distillation.

  • Unify the basic interface of NAS and Pruning. We refactored Mutable, adding mutable values and mutable channels. Both NAS and pruning can now be developed on top of mutables.

  • More documentation and tutorials. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.
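
To make the Recorder / Connector interplay concrete, below is a minimal distiller config in the style of this release's KD configs; the exact keys and loss names are best-effort assumptions rather than a verbatim recipe.

# Hedged sketch of a ConfigurableDistiller config: recorders fetch tensors from
# the student and the teacher, and loss_forward_mappings wires them into the loss.
distiller = dict(
    type='ConfigurableDistiller',
    student_recorders=dict(fc=dict(type='ModuleOutputs', source='head.fc')),
    teacher_recorders=dict(fc=dict(type='ModuleOutputs', source='head.fc')),
    distill_losses=dict(loss_kd=dict(type='KLDivergence', tau=1, loss_weight=3)),
    loss_forward_mappings=dict(
        loss_kd=dict(
            preds_S=dict(from_student=True, recorder='fc'),
            preds_T=dict(from_student=False, recorder='fc'))))
# A connectors=dict(...) entry can be added to project student features to the
# teacher's dimensions when they do not match.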

Breaking Changes

Training and testing

  • MMRazor 1.x runs on PyTorch>=1.6. We have deprecated the support of PyTorch 1.5 to embrace the mixed precision training and other new features since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMRazor 1.x is not guaranteed.
  • MMRazor 1.x uses the Runner in MMEngine rather than the one in MMCV. The new Runner implements and unifies the building logic of datasets, models, evaluation, and visualizers, so MMRazor 1.x no longer maintains the building logic of those modules in mmrazor.apis and tools/train.py; that code has been migrated into MMEngine.
  • The Runner in MMEngine also supports testing and validation. The testing scripts are likewise simplified and build the runner with logic similar to that of the training scripts, as sketched below.
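
For reference, a training entrypoint now reduces to a few MMEngine calls, roughly as below. Config.fromfile and Runner.from_cfg are standard MMEngine APIs; the config path is a placeholder.

# Sketch of the MMEngine-based entrypoint that tools/train.py now wraps.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('path/to/your_mmrazor_config.py')  # placeholder path
runner = Runner.from_cfg(cfg)  # builds the model, dataloaders, evaluator, visualizer
runner.train()                 # the same Runner also drives runner.val() / runner.test()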

Configs

Components

  • Algorithms
  • Distillers
  • Mutators
  • Mutables
  • Hooks

Improvements

  • Support mixed-precision training for all models. However, some models may get NaN results due to numerical issues. We will update the documentation to list the results (accuracy or failure) of mixed-precision training; a config sketch follows below.
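
In MMEngine-based configs, mixed precision is typically enabled through the AMP optimizer wrapper, as in the generic sketch below (an MMEngine mechanism, not an MMRazor-specific field).

# Generic MMEngine sketch: enable automatic mixed precision via the optimizer wrapper.
optim_wrapper = dict(
    type='AmpOptimWrapper',  # wraps the optimizer with torch.cuda.amp autocast / GradScaler
    optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=1e-4))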

Bug Fixes

  • AutoSlim: Models of different sizes will no longer have checkpoints of the same size

New Features

Ongoing changes

  • Quantization: We are developing the basic interfaces of PTQ and QAT. An RFC (Request for Comments) will be released soon.
  • AutoSlim: AutoSlim is not yet available and is being refactored.
  • Fx Pruning Tracer: Currently, the model topology can only be resolved through the backward tracer. In the future, both the backward tracer and the fx tracer will be supported.
  • More Algorithms: BigNAS, AutoFormer, GreedyNAS, and ResRep will be released in the next few versions.
  • Documentation: We will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMRazor 1.x.

Contributors

A total of 13 developers contributed to this release. Thanks @FreakieHuang @gaoyang07 @HIT-cwh @humu789 @LKJacky @pppppM @pprp @spynccat @sunnyxiaohu @wilxy @wutongshenqiu @NickYangMin @Hiwyl. Special thanks to @Davidgzx for his contribution to the data-free distillation algorithms.

v0.3.1

Features

  • Support using different dataloaders with different settings (#141)

Bug Fixes

  • Fixed the inconsistent results of broadcast_object_list on multiple machines (#153)
  • Fixed the bug that the NAS model cannot be searched in non-distributed mode (#153)
  • Fixed the bug that tools/mmseg/train_mmseg.py cannot train properly (#152)
  • Fixed the bug that models containing GroupNorm or InstanceNorm cannot be pruned (#144)

Improvements

  • Add default mutable_cfg, channel_cfg and teacher_checkpoint in configs to reduce the use of cfg-options (#149)

Documents

  • Fixed broken links in readthedocs (#142)

v0.3.0

Features

  • Support MMDeploy (#102)
  • Support Relational Knowledge Distillation (CVPR 2019) (#127)
  • Support different seeds on different ranks during distributed training (#113)
  • StructurePruner supports tracing models that contain dilated Conv2d, such as YOLOF (#113)
  • StructurePruner supports tracing models that contain shared modules, such as RetinaNet (#113)

Bug Fixes

  • Fix the bug that the pruner can't trace shared modules correctly (#113)
  • Fix the bug that the pruner can't trace modules whose requires_grad is False (#113)
  • Fix the bug that the pruner affects the statistics of BatchNorm (#81)

Improvements

  • Update distributed training tools to support training with multiple nodes (#114)
  • Sync with the latest APIs of mmdet and mmcls (#115)

Documents

  • Add brief installation steps in README (#121)
  • Add real examples in GET_STARTED related docs (#133)

v0.2.0

Highlights

  • Support MobileNet series search space (#82)

Features

  • Support CPU training (#62)
  • Support resuming from the latest checkpoint automatically (#61)

Bug Fixes

  • Fix the bug of show_result during testing (#52)
  • Fix bugs in non-distributed training/testing for all tasks (#63)
  • Fix the incorrect value of KLDivergence (#35)
  • Fix the config error of WSLD (#26)
  • Fix the config error of DetNAS (#103)
  • Fix the bug in slurm_train_mmcls.sh (#90)

Improvements

  • Add distributed train/test tools (#105)

Documents

  • Fix some typos (#6, #16, #18, #73)
  • Fix some mistakes in docstrings (#24, #29)

v0.1.0

Highlights

MMRazor v0.1.0 is released.

Major Features

  • Compatibility

    MMRazor can be easily applied to various projects in OpenMMLab, thanks to the similar architecture design across OpenMMLab projects as well as the decoupling of slimming algorithms from vision tasks.

  • Flexibility

    Different algorithms, e.g., NAS, pruning and KD, can be incorporated in a plug-and-play manner to build a more powerful system.

  • Convenience

    With better modular design, developers can implement new model compression algorithms with only a little code, or even by simply modifying config files.