MMDetection Versions

OpenMMLab Detection Toolbox and Benchmark

v2.24.1

2 years ago

What's Changed

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.24.0...v2.24.1

v2.24.0

2 years ago

New Features

  • Support "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation"; see the example configs (#7501)
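
    A minimal pipeline sketch follows (the CopyPaste type name and the max_num_pasted argument are assumptions here; refer to the example configs for the exact setup):

    # Sketch only: CopyPaste is applied through MultiImageMixDataset, in the
    # same spirit as Mosaic/MixUp-style mixed-image transforms.
    train_pipeline = [
        # ... loading and geometric augmentation transforms ...
        dict(type='CopyPaste', max_num_pasted=100),
        # ... normalization and formatting transforms ...
    ]
    train_dataset = dict(
        type='MultiImageMixDataset',
        dataset=dict(type='CocoDataset'),  # dataset settings elided for brevity
        pipeline=train_pipeline)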

  • Support ClassAwareSampler. Users can set

    data = dict(train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1)))

    in the config to use ClassAwareSampler. Examples can be found in the configs of the OpenImages dataset (#7436).

  • Support automatically scaling the learning rate according to the number of GPUs and samples per GPU (#7482). Each config contains a corresponding auto-scaling LR setting, as below:

    auto_scale_lr = dict(enable=True, base_batch_size=N)

    where N is the batch size used for the current learning rate in the config (equal to samples_per_gpu multiplied by the number of GPUs used to train this config). By default, enable=False so that existing usage is not affected. Users can set enable=True in a config or append --auto-scale-lr to the training command to enable this feature, and should check that base_batch_size is correct in customized configs.
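
    For intuition, a minimal sketch of the scaling arithmetic, assuming the feature follows the usual linear scaling rule (the variable names below are illustrative, not the internal API):

    # Illustration only: linear scaling of the LR relative to base_batch_size.
    base_lr = 0.02           # LR tuned for base_batch_size in the config
    base_batch_size = 16     # e.g. 8 GPUs x 2 samples_per_gpu
    num_gpus, samples_per_gpu = 4, 2
    scaled_lr = base_lr * (num_gpus * samples_per_gpu) / base_batch_size  # -> 0.01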

  • Support setting dataloader arguments in the config and add functions to handle config compatibility (#7668). The comparison between the old and new usage is shown below.

    Before v2.24.0:

    data = dict(
        samples_per_gpu=64, workers_per_gpu=4,
        train=dict(type='xxx', ...),
        val=dict(type='xxx', samples_per_gpu=4, ...),
        test=dict(type='xxx', ...),
    )

    Since v2.24.0:

    # A recommended config that is clear
    data = dict(
        train=dict(type='xxx', ...),
        val=dict(type='xxx', ...),
        test=dict(type='xxx', ...),
        # Use different batch sizes during inference.
        train_dataloader=dict(samples_per_gpu=64, workers_per_gpu=4),
        val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
        test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
    )

    # The old style still works, and also allows setting more dataloader arguments
    data = dict(
        samples_per_gpu=64,  # only works for train_dataloader
        workers_per_gpu=4,  # only works for train_dataloader
        train=dict(type='xxx', ...),
        val=dict(type='xxx', ...),
        test=dict(type='xxx', ...),
        # Use different batch sizes during inference.
        val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
        test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
    )
    
  • Support a memory profiler hook. Users can use it to monitor memory usage during training as below (#7560):

    custom_hooks = [
        dict(type='MemoryProfilerHook', interval=50)
    ]
    
  • Support running on PyTorch with MLU chips (#7578)

  • Support re-splitting the data batch with a tag (#7641)

  • Support the DiceCost used by K-Net in MaskHungarianAssigner (#7716)
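
    A hedged sketch of plugging the new cost into an assigner config (the companion costs and weights below are illustrative; check the K-Net configs for the actual values):

    train_cfg = dict(
        assigner=dict(
            type='MaskHungarianAssigner',
            cls_cost=dict(type='FocalLossCost', weight=2.0),
            mask_cost=dict(type='FocalLossCost', weight=1.0, binary_input=True),
            # New in this release: Dice-based mask matching cost used by K-Net.
            dice_cost=dict(type='DiceCost', weight=4.0, pred_act=True, eps=1.0)))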

  • Support splitting COCO data for semi-supervised object detection (#7431)

  • Support Pathlib for Config.fromfile (#7685)

  • Support using a file client in the OpenImages dataset (#7433)

  • Add a probability parameter to Mosaic transformation (#7371)
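
    For example, a hedged pipeline snippet (the parameter is assumed to be named prob, in line with other random transforms):

    train_pipeline = [
        # Apply Mosaic to roughly half of the samples; prob is the assumed name.
        dict(type='Mosaic', img_scale=(640, 640), prob=0.5),
        # ... remaining transforms ...
    ]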

  • Support specifying interpolation mode in Resize pipeline (#7585)
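
    A hedged example of selecting the interpolation mode (the accepted values are assumed to mirror mmcv's imresize, e.g. 'nearest', 'bilinear', 'bicubic'):

    train_pipeline = [
        # interpolation is assumed to accept mmcv imresize modes; default is 'bilinear'.
        dict(type='Resize', img_scale=(1333, 800), keep_ratio=True,
             interpolation='bicubic'),
        # ... remaining transforms ...
    ]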

Bug Fixes

  • Avoid invalid bbox after deform_sampling (#7567)
  • Fix the issue that the color_theme argument does not take effect when exporting the confusion matrix (#7701)
  • Fix end_level in necks, which should be the index of the last input backbone level (#7502)
  • Fix the bug that mix_results may be None in MultiImageMixDataset (#7530)
  • Fix the bug in ResNet plugin when two plugins are used (#7797)

Improvements

  • Enhance load_json_logs of analyze_logs.py for resumed training logs (#7732)
  • Add argument out_file in image_demo.py (#7676)
  • Allow mixed precision training with SimOTAAssigner (#7516)
  • Update INF to 100000.0 to match the official YOLOX implementation (#7778)
  • Add documentation on:
    • how to get channels of a new backbone (#7642)
    • how to unfreeze the backbone network (#7570)
    • how to train fast_rcnn model (#7549)
    • proposals in Deformable DETR (#7690)
    • from-scratch install script in get_started.md (#7575)
  • Release pre-trained models of:
    • Mask2Former (#7595, #7709)
    • RetinaNet with a ResNet-18 backbone (#7387)
    • RetinaNet with an EfficientNet backbone (#7646)

Contributors

A total of 27 developers contributed to this release. Thanks @jovialio, @zhangsanfeng2022, @HarryZJ, @jamiechoi1995, @nestiank, @PeterH0323, @RangeKing, @Y-M-Y, @mattcasey02, @weiji14, @Yulv-git, @xiefeifeihu, @FANG-MING, @meng976537406, @nijkah, @sudz123, @CCODING04, @SheffieldCao, @Czm369, @BIGWangYuDong, @zytx121, @jbwang1997, @chhluo, @jshilong, @RangiLyu, @hhaAndroid, @ZwwWayne

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.23.0...v2.24.0

v2.23.0

2 years ago

New Features

  • Support Mask2Former (#6938, #7466, #7471)
  • Support EfficientNet (#7514)
  • Support setting the data root through the environment variable MMDET_DATASETS, so users no longer need to modify the corresponding paths in config files (#7386); see the sketch after this list
  • Support setting different seeds to different ranks (#7432)
  • Update dist_train.sh so that the script can be used to launch multi-node training on machines without Slurm (#7415)
  • Find a good recipe for fine-tuning the high-precision ResNet backbone pre-trained by Torchvision (#7489)
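
As referenced above for MMDET_DATASETS, a minimal sketch of setting the data root from Python; exporting the variable in the shell before launching training works just as well, and the path below is a placeholder:

    import os

    # Hypothetical usage: set the dataset root before the config is loaded so
    # that data paths in the configs are resolved against this directory.
    os.environ['MMDET_DATASETS'] = '/data/openmmlab-datasets/'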

Bug Fixes

  • Fix a bug in the VOC unit test that removed the data directory (#7270)
  • Adjust the order of get_classes and FileClient (#7276)
  • Force the inputs of get_bboxes in yolox_head to float32 (#7324)
  • Fix misplaced arguments in LoadPanopticAnnotations (#7388)
  • Fix reduction=mean in CELoss (#7449)
  • Update the unit test of CrossEntropyCost (#7537)
  • Fix a memory leak in panoptic segmentation evaluation (#7538)
  • Fix a shape broadcasting bug in YOLOv3 (#7551)

Improvements

  • Add a Chinese version of onnx2tensorrt.md (#7219)
  • Update Colab tutorials (#7310)
  • Update information about Localization Distillation (#7350)
  • Add a Chinese version of finetune.md (#7178)
  • Update the YOLOX log for non-square input (#7235)
  • Add nproc in coco_panoptic.py for panoptic quality computation (#7315)
  • Allow setting channel_order in LoadImageFromFile (#7258); see the sketch after this list
  • Take point-sample-related functions out of mask_point_head (#7353)
  • Add instance evaluation for coco_panoptic (#7313)
  • Enhance the robustness of analyze_logs.py (#7407)
  • Add supplementary notes for sync_random_seed (#7440)
  • Update the docstring of cross entropy loss (#7472)
  • Update Pascal VOC results (#7503)
  • Create How-to documentation to record answers to common "How to xxx" questions. In this version, we added:
    • How to use Mosaic augmentation (#7507)
    • How to use a backbone from mmcls (#7438)
    • How to produce and submit prediction results of panoptic segmentation models on the COCO test-dev set (#7430)
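
As mentioned above for channel_order in LoadImageFromFile, a hedged example of loading images in RGB order instead of the default BGR (the accepted values are assumed to mirror mmcv.imfrombytes):

    train_pipeline = [
        dict(type='LoadImageFromFile', channel_order='rgb'),  # default is 'bgr'
        # ... remaining transforms ...
    ]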

Contributors

A total of 27 developers contributed to this release. Thanks @ZwwWayne, @haofanwang, @shinya7y, @chhluo, @yangrisheng, @triple-Mu, @jbwang1997, @HikariTJU, @imflash217, @274869388, @zytx121, @matrixgame2018, @jamiechoi1995, @BIGWangYuDong, @JingweiZhang12, @Xiangxu-0103, @hhaAndroid, @jshilong, @osbm, @ceroytres, @bunge-bedstraw-herb, @Youth-Got, @daavoo, @jiangyitong, @RangiLyu, @CCODING04, @yarkable

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.22.0...v2.23.0

v2.22.0

2 years ago

Breaking Changes

To support visualization for panoptic segmentation, num_classes cannot be None when using the get_palette function to determine whether to use the panoptic palette.
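
A hedged sketch of the updated call (the import path and signature are assumptions based on mmdet.core.visualization; only the requirement that num_classes be an explicit integer comes from the note above):

    from mmdet.core.visualization import get_palette

    # num_classes may no longer be None; it is used to decide whether the
    # panoptic palette should be applied.
    palette = get_palette('coco', num_classes=80)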

Bug Fixes

  • Fix the bug that the best checkpoints cannot be saved when key_score is None (#7101)
  • Fix a failing case of box filtering in the MixUp transform (#7080)
  • Add missing properties in SABLHead (#7091)
  • Fix a bug when NaNs exist in the confusion matrix (#7147)
  • Fix the PALETTE AttributeError in downstream tasks (#7230)

Improvements

  • Speed up SimOTA matching (#7098)
  • Add Chinese translation of docs_zh-CN/tutorials/init_cfg.md (#7188)

Contributors

A total of 20 developers contributed to this release. Thanks @ZwwWayne, @hhaAndroid, @RangiLyu, @AronLin, @BIGWangYuDong, @jbwang1997, @zytx121, @chhluo, @shinya7y, @LuooChen, @dvansa, @siatwangmin, @del-zhenwu, @vikashranjan26, @haofanwang, @jamiechoi1995, @HJoonKwon, @yarkable, @zhijian-liu, @RangeKing

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.21.0...v2.22.0

v2.21.0

2 years ago

Breaking Changes

To standardize the contents of config READMEs and meta files across OpenMMLab projects, the READMEs and meta files in each config directory have been significantly changed. The template will be released in the future; for now, you can refer to the README examples for an algorithm, a dataset, and a backbone. To align with the standard, the configs in dcn have been split into two directories named dcn and dcnv2.

New Features

  • Allow customizing the colors of different classes during visualization (#6716)
  • Support CPU training (#7016)
  • Add a download script for the COCO, LVIS, and VOC datasets (#7015)

Bug Fixes

  • Fix the weight conversion issue of RetinaNet with Swin-S (#6973)
  • Update __repr__ of Compose (#6951)
  • Fix a BadZipFile error when building the Docker image (#6966)
  • Fix a bug in non-distributed multi-GPU training/testing (#7019)
  • Fix bbox clamping in PyTorch 1.10 (#7074)
  • Relax the requirement of PALETTE in dataset wrappers (#7085)
  • Keep the same weights before reassignment in the PAA head (#7032)
  • Update the code demo in the docs (#7092)

Improvements

  • Speed up training by allowing multi-processing variables to be set (#6974, #7036)
  • Add links to the Chinese tutorials in the README (#6897)
  • Disable cv2 multiprocessing by default for acceleration (#6867)
  • Deprecate support for "python setup.py test" (#6998)
  • Re-organize metafiles and config READMEs (#7051)
  • Fix the None gradient problem when training TOOD by adding SigmoidGeometricMean (#7090)

Contributors

A total of 26 developers contributed to this release. Thanks @del-zhenwu, @zimoqingfeng, @srishilesh, @imyhxy, @jenhaoyang, @jliu-ac, @kimnamu, @ShengliLiu, @garvan2021, @ciusji, @DIYer22, @kimnamu, @q3394101, @zhouzaida, @gaotongxiao, @topsy404, @AntoAndGar, @jbwang1997, @nijkah, @ZwwWayne, @Czm369, @jshilong, @RangiLyu, @BIGWangYuDong, @hhaAndroid, @AronLin

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.20.0...v2.21.0

v2.20.0

2 years ago

New Features

  • Support TOOD: Task-aligned One-stage Object Detection (ICCV 2021 Oral) (#6746)
  • Support resuming from the latest checkpoint automatically (#6727)

Bug Fixes

  • Fix the wrong bbox loss_weight of the PAA head (#6744)
  • Fix the padding value of gt_semantic_seg in batch collating (#6837)
  • Fix the test error of LVIS when using classwise evaluation (#6845)
  • Avoid BC-breaking of get_local_path (#6719)
  • Fix a bug in sync_norm_hook when the BN layer does not exist (#6852)
  • Use pycocotools directly regardless of platform (#6838)

Improvements

  • Add a unit test for SimOTA with no valid bbox (#6770)
  • Use pre-commit to check the README (#6802)
  • Support selecting GPU IDs in non-distributed testing (#6781)

Contributors

A total of 16 developers contributed to this release. Thanks @ZwwWayne, @Czm369, @jshilong, @RangiLyu, @BIGWangYuDong, @hhaAndroid, @jamiechoi1995, @AronLin, @Keiku, @gkagkos, @fcakyon, @www516717402, @vansin, @zactodd, @kimnamu, @jenhaoyang

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v2.19.1...v2.20.0

v2.0.0

2 years ago

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v1.2.0...v2.0.0

v1.2.0

2 years ago

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v1.1.0...v1.2.0

v1.1.0

2 years ago

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v1.0.0...v1.1.0

v1.0.0

2 years ago

Full Changelog: https://github.com/open-mmlab/mmdetection/compare/v0.6.0...v1.0.0