NAS-FCOS: Fast Neural Architecture Search for Object Detection

This project hosts the training and inference code, with pretrained models, for the NAS-FCOS object detection algorithm presented in our paper:

NAS-FCOS: Fast Neural Architecture Search for Object Detection;
Ning Wang, Yang Gao, Hao Chen, Peng Wang, Zhi Tian, Chunhua Shen;
In: Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020.

The full paper is available at: NAS-FCOS Paper.

Updates

  • News: Accepted by CVPR 2020. (24/02/2020)
  • Upload solver module to support self-training. (06/02/2020)
  • Support RetinaNet detector in NAS module (pretrained model coming soon). (06/02/2020)
  • Update NAS head module, config files and pretrained model links. (07/01/2020)

Required hardware

We use 4 Nvidia V100 GPUs.

Installation

This NAS-FCOS implementation is based on maskrcnn-benchmark. Therefore, the installation is the same as for the original maskrcnn-benchmark.

Please check INSTALL.md for installation instructions. You may also want to see the original README.md of maskrcnn-benchmark.
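
In case it helps, the sketch below outlines a typical maskrcnn-benchmark-style environment setup. It is only a rough outline: the environment name, package versions, and CUDA toolkit version are assumptions, and INSTALL.md remains the authoritative reference.

# Minimal sketch; environment name and versions below are illustrative.
conda create --name nas_fcos python=3.7
conda activate nas_fcos
pip install ninja yacs cython matplotlib tqdm
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

# pycocotools for COCO evaluation
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI && python setup.py build_ext install && cd ../..

# build this repository (compiles the C++/CUDA ops, as in maskrcnn-benchmark)
git clone https://github.com/Lausannen/NAS-FCOS.git
cd NAS-FCOS
python setup.py build develop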

Train

The training command line on the COCO train set:

python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --master_port=1213 \
    tools/train_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml"
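
Because the configs inherit maskrcnn-benchmark's yacs-based configuration system, individual values can usually be overridden by appending KEY VALUE pairs to the command. The overrides below (batch size and output directory) are only an illustrative sketch; check the keys against the files in configs/search before relying on them.

python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --master_port=1213 \
    tools/train_net.py --config-file "configs/search/R_50_NAS_retinanet.yaml" \
    SOLVER.IMS_PER_BATCH 8 \
    OUTPUT_DIR "training_dir/R_50_NAS_retinanet"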

Inference

The inference command line on the COCO minival split:

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/test_net.py --config-file "configs/search/R_50_NAS_densebox.yaml"

Please note that:

  1. If your model's name is different, please replace models/R-50-NAS.pth with your own.
  2. If you encounter an out-of-memory error, please try reducing TEST.IMS_PER_BATCH to 1.
  3. If you want to evaluate a different model, please change --config-file to its config file (in configs/search) and MODEL.WEIGHT to its weights file, as in the example below.
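
For example, to evaluate the R_50_NAS model with its released weights and a reduced test batch size, combine the notes above into one command (adjust the config file and weights path to the model you are testing):

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/test_net.py --config-file "configs/search/R_50_NAS_densebox.yaml" \
    MODEL.WEIGHT models/R-50-NAS.pth \
    TEST.IMS_PER_BATCH 1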

For your convenience, we provide the following trained models (more models are coming soon).

| Model             | Multi-scale training | AP (minival) | AP (test-dev) | Link     | Fetch Code |
|-------------------|----------------------|--------------|---------------|----------|------------|
| Mobile_NAS        | No                   | 32.6         | 33.1          | download | 3dm9       |
| Mobile_NAS_head   | No                   | 34.4         | 34.7          | download | -          |
| R_50_NAS          | No                   | 38.5         | 38.9          | download | f88u       |
| R_50_NAS_head     | No                   | 39.5         | 39.8          | download | -          |
| R_101_NAS         | Yes                  | 42.1         | 42.5          | download | euuz       |
| R_101_NAS_head    | Yes                  | 42.8         | 43.0          | download | -          |
| R_101_X_32x8d_NAS | Yes                  | 43.4         | 43.7          | download | 4cci       |

Attention: If the model links above cannot be downloaded normally, please use the following alternative links: Mobile_NAS, Mobile_NAS_head, R_50_NAS, R_50_NAS_head, R_101_NAS, R_101_NAS_head, R_101_X_32x8d_NAS.

All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.

Contributing to the project

Any pull requests or issues are welcome.

Citations

Please consider citing our paper in your publications if the project helps your research. The BibTeX entry is as follows.

@InProceedings{Wang_2020_CVPR,
    author = {Wang, Ning and Gao, Yang and Chen, Hao and Wang, Peng and Tian, Zhi and Shen, Chunhua and Zhang, Yanning},
    title = {NAS-FCOS: Fast Neural Architecture Search for Object Detection},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}

License

For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact the authors.
