EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
Models are trained on ImageNet-1K and deployed on iPhone 12 with CoreMLTools to measure latency.
Rethinking Vision Transformers for MobileNet Size and Speed
Yanyu Li1,2, Ju Hu1, Yang Wen1, Georgios Evangelidis1, Kamyar Salahi3,
Yanzhi Wang2, Sergey Tulyakov1, Jian Ren1
1Snap Inc., 2Northeastern University, 3UC Berkeley
ONNX and CoreML export is supported for `efficientformerv2_s0`, `efficientformerv2_s1`, `efficientformerv2_s2`, and `efficientformerv2_l`, for example:

```
python toolbox.py --model efficientformerv2_l --ckpt weights/eformer_l_450.pth --onnx --coreml
```
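For orientation, below is a minimal sketch of such an export path: dump ONNX with `torch.onnx.export`, then convert a traced TorchScript module with coremltools. Building the model through timm is an assumption made here for self-containedness; `toolbox.py` uses this repo's own model definitions and remains the reference.

```python
import torch
import coremltools as ct
import timm  # assumption: a timm version that registers these architectures

model = timm.create_model("efficientformerv2_l", num_classes=1000).eval()
example = torch.rand(1, 3, 224, 224)  # ImageNet-resolution dummy input

# ONNX export directly from the nn.Module
torch.onnx.export(model, example, "efficientformerv2_l.onnx", opset_version=13)

# CoreML export goes through a traced TorchScript module
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example.shape)],
    convert_to="neuralnetwork",  # keeps the legacy .mlmodel container
)
mlmodel.save("efficientformerv2_l.mlmodel")
```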
Models are trained on ImageNet-1K and latency is measured on an iPhone 12 with CoreMLTools.
EfficientFormer: Vision Transformers at MobileNet Speed
Yanyu Li1,2, Geng Yuan1,2, Yang Wen1, Ju Hu1, Georgios Evangelidis1,
Sergey Tulyakov1, Yanzhi Wang2, Jian Ren1
1Snap Inc., 2Northeastern University
Model | Top-1 Acc. (300 / 450 epochs) | #Params | MACs | Latency | ckpt | ONNX | CoreML |
---|---|---|---|---|---|---|---|
EfficientFormerV2-S0 | 75.7 / 76.2 | 3.5M | 0.40B | 0.9ms | S0 | S0 | S0 |
EfficientFormerV2-S1 | 79.0 / 79.7 | 6.1M | 0.65B | 1.1ms | S1 | S1 | S1 |
EfficientFormerV2-S2 | 81.6 / 82.0 | 12.6M | 1.25B | 1.6ms | S2 | S2 | S2 |
EfficientFormerV2-L | 83.3 / 83.5 | 26.1M | 2.56B | 2.7ms | L | L | L |
Model | Top-1 Acc. | Latency | PyTorch Checkpoint | CoreML | ONNX |
---|---|---|---|---|---|
EfficientFormer-L1 | 79.2 (80.2) | 1.6ms | L1-300 (L1-1000) | L1 | L1 |
EfficientFormer-L3 | 82.4 | 3.0ms | L3 | L3 | L3 |
EfficientFormer-L7 | 83.3 | 7.0ms | L7 | L7 | L7 |
The latency reported in EfficientFormerV2 for iPhone 12 (iOS 16) uses the benchmark tool from Xcode 14.
For EfficientFormerV1, we use coreml-performance. Thanks for the nicely implemented latency measurement!
Tips: macOS + Xcode and a mobile device (iPhone 12) are needed to reproduce the reported speed.
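As a rough local proxy (not a substitute for the on-device numbers above), a converted model can be timed directly with coremltools on macOS. The input name `"input"` matches the export sketch above and is otherwise an assumption.

```python
import time
import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel("efficientformerv2_l.mlmodel")
x = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

for _ in range(10):  # warm-up runs to stabilize caches/compilation
    mlmodel.predict(x)

n = 100
t0 = time.perf_counter()
for _ in range(n):
    mlmodel.predict(x)
print(f"avg latency: {(time.perf_counter() - t0) / n * 1e3:.2f} ms")
```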
A `conda` virtual environment is recommended.

```
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install timm
pip install submitit
```
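A quick sanity check that the environment is usable (package names as installed above):

```python
import torch
import timm
import submitit  # imported only to confirm installation

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("timm:", timm.__version__)
```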
Download and extract ImageNet train and val images from http://image-net.org/. The training and validation data are expected to be in the `train` folder and the `val` folder, respectively:

```
|-- /path/to/imagenet/
    |-- train
    |-- val
```
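To verify the layout, the folders can be read with the standard torchvision `ImageFolder` (a sanity check only; the repo's training pipeline builds its own loaders):

```python
from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
val_set = datasets.ImageFolder("/path/to/imagenet/val", transform=tf)
print(len(val_set), "images across", len(val_set.classes), "classes")
```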
We provide an example training script `dist_train.sh` using PyTorch distributed data parallel (DDP). To train EfficientFormer-L1 on an 8-GPU machine:

```
sh dist_train.sh efficientformer_l1 8
```
Tips: specify your data path and experiment name in the script!
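For orientation, `dist_train.sh` relies on the usual DDP boilerplate; a minimal sketch launched via `torchrun` is below, with the timm model name standing in for the repo's own builder (an assumption, not the repo's actual entry point):

```python
import os

import timm
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with: torchrun --nproc_per_node=8 train_sketch.py
dist.init_process_group(backend="nccl")  # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = timm.create_model("efficientformer_l1", num_classes=1000).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])
# ... build DistributedSampler-backed loaders, then run the usual train loop ...
dist.destroy_process_group()
```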
On a Slurm-managed cluster, multi-node training can be launched through submitit, for example:

```
sh slurm_train.sh efficientformer_l1
```
Tips: specify GPUs/CPUs/memory per node in the script based on your resource!
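`slurm_train.sh` goes through submitit; the core of such a launch looks roughly like the sketch below. Resource numbers and the partition name are placeholders, and `train` is a hypothetical DDP-aware entry point:

```python
import submitit

def train():
    # hypothetical entry point; in the repo this would run the DDP training
    pass

executor = submitit.AutoExecutor(folder="slurm_logs")
executor.update_parameters(
    nodes=2,
    tasks_per_node=8,   # one task per GPU
    gpus_per_node=8,
    cpus_per_task=10,
    timeout_min=24 * 60,
    slurm_partition="your_partition",  # placeholder
)
job = executor.submit(train)
print("submitted:", job.job_id)
```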
We provide an example test script `dist_test.sh` using PyTorch distributed data parallel (DDP). For example, to test EfficientFormer-L1 on an 8-GPU machine:

```
sh dist_test.sh efficientformer_l1 8 weights/efficientformer_l1_300d.pth
```
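For a quick single-GPU sanity check of a downloaded checkpoint, a plain evaluation loop suffices. This assumes a timm build of the architecture; state-dict keys may need remapping against the repo's own definitions, and `dist_test.sh` remains the reference:

```python
import timm
import torch
from torchvision import datasets, transforms

model = timm.create_model("efficientformer_l1", num_classes=1000)
state = torch.load("weights/efficientformer_l1_300d.pth", map_location="cpu")
model.load_state_dict(state.get("model", state))  # ckpts often nest under "model"
model.eval().cuda()

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder("/path/to/imagenet/val", tf), batch_size=128, num_workers=8)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images.cuda()).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"top-1: {100 * correct / total:.1f}%")
```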
Object Detection and Instance Segmentation
Semantic Segmentation
The classification (ImageNet) code base is partly built upon LeViT and PoolFormer.
The detection and segmentation pipeline is from MMCV (MMDetection and MMSegmentation).
Thanks for the great implementations!
If our code or models help your work, please cite EfficientFormer (NeurIPS 2022) and EfficientFormerV2 (ICCV 2023):
@article{li2022efficientformer,
title={EfficientFormer: Vision Transformers at MobileNet Speed},
author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Ju and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
journal={Advances in Neural Information Processing Systems},
volume={35},
pages={12934--12949},
year={2022}
}
@inproceedings{li2022rethinking,
title={Rethinking Vision Transformers for MobileNet Size and Speed},
author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2023}
}