MMClassification Versions

OpenMMLab Pre-training Toolbox and Benchmark

v0.24.0

1 year ago

Highlights

  • Support HorNet, EfficientFormer, Swin Transformer V2, and MViT backbones.
  • Support Stanford Cars dataset.

New Features

  • Support HorNet Backbone. (#1013)
  • Support EfficientFormer. (#954)
  • Support Stanford Cars dataset. (#893)
  • Support CSRA head. (#881)
  • Support Swin Transformer V2. (#799)
  • Support MViT and add checkpoints. (#924)

Improvements

  • [Improve] Replace the progress bar loop in api/test. (#878)
  • [Enhance] RepVGG for YOLOX-PAI. (#1025)
  • [Enhancement] Update VAN. (#1017)
  • [Refactor] Re-write get_sinusoid_encoding from third-party implementation. (#965)
  • [Improve] Upgrade onnxsim to v0.4.0. (#915)
  • [Improve] Fix typo in RepVGG. (#985)
  • [Improve] Use train_step instead of forward in PreciseBNHook. (#964)
  • [Improve] Use forward_dummy to calculate FLOPs. (#953)

Bug Fixes

  • Fix warning with torch.meshgrid (see the note after this list). (#860)
  • Add matplotlib minimum version requirements. (#909)
  • The val loader should not drop the last batch by default. (#857)
  • Fix config.device bug in tutorial. (#1059)
  • Fix attention clamp max params. (#1034)
  • Fix device mismatch in Swin-v2. (#976)
  • Fix the output position of Swin-Transformer. (#947)
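The torch.meshgrid warning comes from newer PyTorch releases that expect an explicit indexing mode. A minimal illustration of the usual remedy (the actual fix in the codebase may wrap this for older PyTorch versions that do not accept the argument):

```python
import torch

# On PyTorch >= 1.10, passing indexing explicitly silences the deprecation
# warning while keeping the historical 'ij' (matrix-style) behaviour.
ys, xs = torch.meshgrid(torch.arange(4), torch.arange(6), indexing='ij')
print(ys.shape, xs.shape)  # torch.Size([4, 6]) torch.Size([4, 6])
```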

Docs Update

  • Add version for torchvision to avoid error. (#903)
  • Fix typo for --out-dir option of analyze_results.py. (#898)
  • Refine the docstring of RegNet. (#935)

Contributors

A total of 19 developers contributed to this release.

@a-mos @Ezra-Yu @Fei-Wang @nijkah @PeterH0323 @yingfhu @techmonsterwang @JiayuXu0 @jlim262 @hukkai @mzr1996 @liu-mengyang @twmht @pallgeuer @timothylimyl @daquexian @okotaku @tpoisonooo @zzc98

v1.0.0rc1

1 year ago

Highlights

  • Support MViT, EdgeNeXt, Swin-Transformer V2, EfficientFormer and MobileOne.
  • Support BEiT type transformer layer.

New Features

  • Support MViT for MMCLS 1.x. (#1023)
  • Add ViT huge architecture. (#1049)
  • Support EdgeNeXt for dev-1.x. (#1037)
  • Support Swin Transformer V2 for MMCLS 1.x. (#1029)
  • Add EfficientFormer backbone for MMCls 1.x. (#1031)
  • Add MobileOne backbone for MMCls 1.x. (#1030)
  • Support BEiT Transformer layer. (#919)

Improvements

  • [Refactor] Fix visualization tools. (#1045)
  • [Improve] Update benchmark scripts. (#1028)
  • [Improve] Update tools to enable pin_memory and persistent_workers by default. (#1024)
  • [CI] Update circle-ci and github workflow. (#1018)

Bug Fixes

  • Fix verify dataset tool in 1.x. (#1062)
  • Fix loss_weight in LabelSmoothLoss. (#1058)
  • Fix the output position of Swin-Transformer. (#947)

Docs Update

  • Fix typo in migration document. (#1063)
  • Auto generate model summary table. (#1010)
  • Refactor new modules tutorial. (#998)

Contributors

A total of 8 developers contributed to this release.

@Ezra-Yu @yingfhu @mzr1996 @tonysy @fangyixiao18 @YuanLiuuuuuu @HIT-cwh @techmonsterwang

v1.0.0rc0

1 year ago

MMClassification 1.0.0rc0 is the first version of MMClassification 1.x, a part of the OpenMMLab 2.0 projects.

Built upon the new training engine, MMClassification 1.x unifies the interfaces of datasets, models, evaluation, and visualization.

There are also some BC-breaking changes. Please check the migration tutorial for more details.

v0.23.2

1 year ago

New Features

  • Support MPS device. (#894)
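MPS support builds on PyTorch's Metal backend (available since PyTorch 1.12). A minimal device-selection sketch, independent of the mmcls tools:

```python
import torch

# Prefer Apple's Metal Performance Shaders backend when available,
# otherwise fall back to the CPU.
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
x = torch.randn(2, 3, 224, 224, device=device)
print(x.device)
```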

Improvements

  • Add test mim CI. (#879)

Bug Fixes

  • [Fix] Fix Albu crash bug. (#918)
  • [Fix] Add mim to extras_require in setup.py. (#872)

Contributors

A total of 2 developers contributed to this release.

@mzr1996 @PeterH0323

v0.23.1

1 year ago

Highlights

  • New MMClsWandbHook to store your training logs and visualize validation results!

New Features

  • [Feature] Dedicated MMClsWandbHook for MMClassification (Weights and Biases Integration) (#764)
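The hook is enabled through the config's log_config. A sketch along the lines of the PR description; the exact keyword arguments may differ between versions:

```python
# Config sketch: register MMClsWandbHook next to the text logger.
log_config = dict(
    interval=100,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(
            type='MMClsWandbHook',
            init_kwargs={'project': 'mmclassification'},  # forwarded to wandb.init
            log_checkpoint=True,           # upload checkpoints as W&B artifacts
            log_checkpoint_metadata=True,  # attach eval metrics to checkpoints
            num_eval_images=100),          # number of validation images to log
    ])
```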

Improvements

  • [Refactor] Use mdformat instead of markdownlint to format markdown. (#844)

Bug Fixes

  • [Fix] Fix wrong --local_rank.

Docs Update

  • [Docs] Update install tutorials. (#854)
  • [Docs] Fix wrong link in README. (#835)

Contributors

A total of 3 developers contributed to this release.

@ayulockin @mzr1996 @timothylimyl

v0.23.0

2 years ago

New Features

  • Support DenseNet. (#750)
  • Support VAN. (#739)

Improvements

  • Support training on IPU and add fine-tuning configs of ViT. (#723)

Docs Update

  • New-style API reference that is easier to use. Welcome to check it out! (#774)

Contributors

A total of 4 developers contributed to this release.

@mzr1996 @okotaku @yingfhu @HuDi2018

v0.22.1

2 years ago

New Features

  • Support resizing the relative position embedding in SwinTransformer. (#749)
  • Add PoolFormer backbone and checkpoints. (#746)

Improvements

  • Improve CPE performance by reducing memory copies. (#762)
  • Add extra dataloader settings in configs. (#752)

Contributors

A total of 4 developers contributed to this release.

@mzr1996 @yuweihao @XiaobingSuper @YuanLiuuuuuu

v0.22.0

2 years ago

Considering that more and more codebases depend on new features of MMClassification, we will release a minor version in the middle of every month. 😉

Highlights

  • Support a series of CSP networks, such as CSP-ResNet, CSP-ResNeXt and CSP-DarkNet.
  • A new CustomDataset class to help you build your own dataset!
  • Support ConvMixer, RepMLP and a new dataset: the CUB dataset.

New Features

  • Add CSPNet backbone and checkpoints. (#735)
  • Add CustomDataset (see the config sketch after this list). (#738)
  • Add different seeds to different ranks. (#744)
  • Support ConvMixer. (#716)
  • Our dist_train & dist_test tools support distributed training on multiple machines. (#734)
  • Add RepMLP backbone and checkpoints. (#709)
  • Support CUB dataset. (#703)
  • Support ResizeMix. (#676)
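For the new CustomDataset, a hypothetical config sketch (paths are placeholders; without an annotation file, class names are inferred from the sub-folder names under data_prefix):

```python
# Hypothetical dataset config: one sub-folder per class under data_prefix.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label']),
]
data = dict(
    samples_per_gpu=32,
    workers_per_gpu=2,
    train=dict(
        type='CustomDataset',
        data_prefix='data/my_dataset/train',
        pipeline=train_pipeline))
```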

Improvements

  • Use --a-b instead of --a_b in arguments. (#754)
  • Add get_cat_ids and get_gt_labels to KFoldDataset. (#721)
  • Set torch seed in worker_init_fn. (#733)
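A sketch in the spirit of the seeding change; the exact signature used inside mmcls may differ:

```python
import random

import numpy as np
import torch


def worker_init_fn(worker_id: int, num_workers: int, rank: int, seed: int) -> None:
    """Give every dataloader worker a distinct, reproducible seed."""
    worker_seed = num_workers * rank + worker_id + seed
    np.random.seed(worker_seed)
    random.seed(worker_seed)
    torch.manual_seed(worker_seed)  # also seed torch, which is what this change adds
```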

Bug Fixes

  • Fix the discontiguous output feature map of ConvNeXt. (#743)

Docs Update

  • Add brief installation steps in README for easy copy-and-paste. (#755)
  • Fix logo URL link from mmocr to mmcls. (#732)

Contributors

A total of 6 developers contributed to this release.

@Ezra-Yu @yingfhu @Hydrion-Qlz @mzr1996 @huyu398 @okotaku

v0.21.0

2 years ago

Highlights

  • Support ResNetV1c and Wide-ResNet, and provide pre-trained models.
  • Support dynamic input shape for ViT-based algorithms. Now our ViT, DeiT, Swin Transformer and T2T-ViT support forwarding with any input shape (see the sketch after this list).
  • Reproduce training results of DeiT. Our DeiT-T and DeiT-S achieve higher accuracy compared with the official weights.
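A minimal sketch of forwarding a non-default input shape, assuming the mmcls 0.21 backbone-building API (the config fields shown are illustrative):

```python
import torch
from mmcls.models import build_backbone

# Build a ViT-Base backbone configured for 224x224 and feed it a different
# resolution; the position embeddings are resized on the fly.
backbone = build_backbone(dict(type='VisionTransformer', arch='b', img_size=224))
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 256, 192))
```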

New Features

  • Add ResNetV1c. (#692)
  • Support Wide-ResNet. (#715)
  • Support GeM pooling. (#677)

Improvements

  • Reproduce training results of DeiT. (#711)
  • Add ConvNeXt pre-trained models on ImageNet-1k. (#707)
  • Support dynamic input shape for ViT-based algorithms. (#706)
  • Add evaluate function for ConcatDataset. (#650)
  • Enhance vis-pipeline tool. (#604)
  • Return exit code 1 if a script fails. (#694)
  • Use the official PyTorch one_hot to implement convert_to_one_hot (see the sketch after this list). (#696)
  • Add a new pre-commit-hook to automatically add a copyright. (#710)
  • Add deprecation message for deploy tools. (#697)
  • Upgrade isort pre-commit hooks. (#687)
  • Use --gpu-id instead of --gpu-ids in non-distributed multi-gpu training/testing. (#688)
  • Remove deprecated usages. (#633)
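A sketch of what delegating to the official helper looks like; the real utility's shape handling may differ:

```python
import torch
import torch.nn.functional as F


def convert_to_one_hot(targets: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Turn integer labels of shape (N,) into one-hot vectors of shape (N, num_classes)."""
    assert int(targets.max()) < num_classes, 'label index exceeds num_classes'
    return F.one_hot(targets.long(), num_classes=num_classes).float()
```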

Bug Fixes

  • Fix Conformer forward with irregular input size. (#686)
  • Add dist.barrier to fix a bug in directory checking. (#666)

Contributors

A total of 8 developers contributed to this release.

@Ezra-Yu @HumberMe @mzr1996 @twmht @RunningLeon @yasu0001 @okotaku @yingfhu

v0.20.1

2 years ago

Bug Fixes

  • Fix the MMCV dependency version.