[ICCV 2023] Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection
Yuxin Fang1 *, Shusheng Yang1 *, Shijie Wang1 *, Yixiao Ge2, Ying Shan2, Xinggang Wang1 :email:
1 School of EIC, HUST, 2 ARC Lab, Tencent PCG.
(*) equal contribution, (:email:) corresponding author.
ICCV 2023 [paper]
19 May, 2022: We have updated our preprint with stronger results and more analysis. Code & models are also updated in the main branch. For our previous results (code & models), please refer to the v1.0.0 branch.

6 Apr, 2022: Code & models are released!
This repo provides code and pretrained models for MIMDet (Masked Image Modeling for Detection).
| Model | Sample Ratio | Schedule | Aug | Box AP | Mask AP | #params | config | model / log |
|---|---|---|---|---|---|---|---|---|
| MIMDet-ViT-B | 0.5 | 3x | [480-800, 1333] w/crop | 51.7 | 46.2 | 127.96M | config | model / log |
| MIMDet-ViT-L | 0.5 | 3x | [480-800, 1333] w/crop | 54.3 | 48.2 | 349.33M | config | model / log |
| Benchmarking-ViT-B | - | 25ep | [1024, 1024] LSJ(0.1-2) | 48.0 | 43.0 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 50ep | [1024, 1024] LSJ(0.1-2) | 50.2 | 44.9 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 100ep | [1024, 1024] LSJ(0.1-2) | 50.4 | 44.9 | 118.67M | config | model / log |
Installation:
```
git clone https://github.com/hustvl/MIMDet.git
cd MIMDet
conda create -n mimdet python=3.9
conda activate mimdet
```
Requirements:

- torch==1.9.0 and torchvision==0.10.0
- Detectron2==0.6, follow the d2 doc.
- timm==0.4.12, follow the timm doc.
- einops, follow the einops repo.
- COCO dataset, follow the d2 doc.

MIMDet is built upon detectron2, so please organize the dataset directory in detectron2's manner. We refer users to detectron2 for detailed instructions. The overall hierarchical structure is illustrated as follows:
```
MIMDet
├── datasets
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
│   ├── ...
├── ...
```
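Before training, a quick sanity check can save time. The snippet below is a minimal sketch (our illustration, not part of the MIMDet codebase) that verifies the pinned package versions and confirms detectron2 can find the COCO splits from the `datasets/` layout above:

```python
# Minimal environment/dataset sanity check (illustrative; not part of MIMDet).
import torch, torchvision, timm, einops
import detectron2
from detectron2.data import DatasetCatalog

print(torch.__version__, torchvision.__version__)  # expected: 1.9.0, 0.10.0
print(detectron2.__version__, timm.__version__)    # expected: 0.6, 0.4.12

# detectron2 registers the standard COCO splits relative to $DETECTRON2_DATASETS
# (default: ./datasets), matching the hierarchy shown above.
records = DatasetCatalog.get("coco_2017_val")
print(f"coco_2017_val: {len(records)} images")     # expected: 5000
```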
Download the full MAE pretrained ViT-B Model and ViT-L Model checkpoints (including the decoder); see issue #8 of the MAE repo.
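Since MIMDet needs the MAE decoder as well as the encoder, it is worth confirming that the downloaded checkpoint actually contains decoder weights. A small sketch (the filename is a placeholder; we assume the official MAE format, which stores the state dict under a "model" key):

```python
# Inspect an MAE checkpoint to confirm decoder weights are present.
# Assumes the official MAE format: a dict with the state dict under "model".
import torch

ckpt = torch.load("mae_pretrain_vit_base_full.pth", map_location="cpu")  # hypothetical filename
state_dict = ckpt["model"] if "model" in ckpt else ckpt
decoder_keys = [k for k in state_dict if k.startswith("decoder")]
print(f"{len(decoder_keys)} decoder parameter tensors found")
assert decoder_keys, "encoder-only checkpoint? see MAE repo issue #8 for the full one"
```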
Training:

```
# single-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> mae_checkpoint.path=<MAE_MODEL_PATH>

# multi-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --num-machines <MACHINE_NUM> --master_addr <MASTER_ADDR> --master_port <MASTER_PORT> mae_checkpoint.path=<MAE_MODEL_PATH>
```
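The trailing `key=value` arguments (e.g. `mae_checkpoint.path=...`) are detectron2 lazy-config overrides. If you prefer to script this, the same override can be applied in Python via detectron2's `LazyConfig` API (a sketch; the paths are placeholders, as in the commands above):

```python
# Load a MIMDet lazy config and apply the same override as on the command line.
from detectron2.config import LazyConfig

cfg = LazyConfig.load("<CONFIG_FILE>")  # placeholder: a config file from this repo
cfg = LazyConfig.apply_overrides(cfg, ["mae_checkpoint.path=<MAE_MODEL_PATH>"])
```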
Inference:

```
# inference
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH>

# inference with 100% sample ratio (please refer to our paper for detailed analysis)
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH> model.backbone.bottom_up.sample_ratio=1.0
```
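`model.backbone.bottom_up.sample_ratio` controls the fraction of patch tokens the ViT encoder actually processes: the configs in the table above train with 0.5, and you can evaluate with 1.0 as shown. The sketch below illustrates the idea of MAE-style random token sampling (our illustration only, not MIMDet's actual implementation):

```python
# Illustrative partial-sequence sampling: keep a random subset of patch tokens.
# Mirrors MAE-style random masking; not the actual MIMDet code.
import torch

def sample_tokens(tokens: torch.Tensor, sample_ratio: float) -> torch.Tensor:
    """tokens: (B, N, C) patch embeddings; returns (B, int(r*N), C)."""
    B, N, C = tokens.shape
    num_keep = max(1, int(N * sample_ratio))
    # Random permutation per image, then keep the first num_keep indices.
    keep = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :num_keep]
    return tokens.gather(1, keep.unsqueeze(-1).expand(-1, -1, C))

x = torch.randn(2, 196, 768)        # e.g. 14x14 patches from a ViT-B
print(sample_tokens(x, 0.5).shape)  # torch.Size([2, 98, 768])
```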
This project is based on MAE, Detectron2 and timm. Thanks for their wonderful work.
MIMDet is released under the MIT License.
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```
@article{MIMDet,
  title={Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection},
  author={Fang, Yuxin and Yang, Shusheng and Wang, Shijie and Ge, Yixiao and Shan, Ying and Wang, Xinggang},
  journal={arXiv preprint arXiv:2204.02964},
  year={2022}
}
```