MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning
This repo is the official implementation of the paper MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers.
@article{MixMAE,
  author  = {Jihao Liu and Xin Huang and Jinliang Zheng and Yu Liu and Hongsheng Li},
  journal = {arXiv:2205.13137},
  title   = {MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers},
  year    = {2022},
}
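The core idea of MixMAE is to mix two training images by taking complementary sets of patches from each, then reconstruct both originals from the mixed input. A minimal NumPy sketch of that mixing step is below; it is illustrative only, and `mix_images`, the patch size, and the mask ratio are assumptions for the example, not the repo's actual code.

```python
import numpy as np

def mix_images(img_a, img_b, mask_ratio=0.5, patch=4, seed=0):
    """Mix two images by taking complementary sets of patches.

    A patch-level binary mask decides which patches of the mixed
    image come from img_a; the remaining patches come from img_b.
    (Hypothetical helper for illustration, not the repo's API.)
    """
    rng = np.random.default_rng(seed)
    h, w, _ = img_a.shape
    gh, gw = h // patch, w // patch
    # Each patch is taken from img_a with probability `mask_ratio`.
    keep = rng.random((gh, gw)) < mask_ratio
    # Upsample the patch-level mask to pixel resolution.
    mask = np.repeat(np.repeat(keep, patch, axis=0), patch, axis=1)[..., None]
    return np.where(mask, img_a, img_b), keep

# Toy example: mix an all-ones and an all-zeros 8x8 RGB image.
img_a = np.ones((8, 8, 3), dtype=np.float32)
img_b = np.zeros((8, 8, 3), dtype=np.float32)
mixed, keep = mix_images(img_a, img_b)
```

The mixed image carries patches from both inputs, so one forward pass supervises the reconstruction of two images, which is where the pretraining efficiency comes from.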
| Models | Params (M) | FLOPs (G) | Pretrain Epochs | Top-1 Acc. (%) | Pretrain ckpt | Finetune ckpt |
|---|---|---|---|---|---|---|
| Swin-B/W14 | 88 | 16.3 | 600 | 85.1 | base_600ep | base_600ep_ft |
| Swin-B/W16-384x384 | 89.6 | 52.6 | 600 | 86.3 | base_600ep | base_600ep_ft_384x384 |
| Swin-L/W14 | 197 | 35.9 | 600 | 85.9 | large_600ep | large_600ep_ft |
| Swin-L/W16-384x384 | 199 | 112 | 600 | 86.9 | large_600ep | large_600ep_ft_384x384 |
We use Slurm for multi-node distributed pretraining and finetuning. The launch scripts take the Slurm partition name, the number of GPUs, and the path to the ImageNet directory:

sh exp/base_600ep/pretrain.sh partition 16 /path/to/imagenet
sh exp/base_600ep/finetune.sh partition 8 /path/to/imagenet