
DCFNet_pytorch (JCST)

[🔥News🔥] DCFNet has been accepted by JCST. If you find DCFNet useful in your research, please consider citing:

@Article{JCST-2309-13788,
  title   = {DCFNet: Discriminant Correlation Filters Network for Visual Tracking},
  journal = {Journal of Computer Science and Technology},
  year    = {2023},
  issn    = {1000-9000(Print)/1860-4749(Online)},
  doi     = {10.1007/s11390-023-3788-3},
  author  = {Wei-Ming Hu and Qiang Wang and Jin Gao and Bing Li and Stephen Maybank}
}

This repository contains a Python (PyTorch) reimplementation of DCFNet.

Why reimplement it in Python (PyTorch)?

  • PyTorch's autograd mechanism: there is no need to derive the complicated backpropagation of the correlation filter by hand (see the sketch after this list).
  • Fast Fourier Transforms (FFT) are supported by PyTorch 0.4.0.
  • Engineering demand.
  • Fast test speed (120 FPS on a GTX 1060) and multi-GPU training.
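
The first two points are what make the correlation filter end-to-end trainable: the closed-form filter solution is written entirely with differentiable FFT operations, so autograd supplies the backward pass. Below is a minimal sketch of such a layer. It is illustrative only, not the repository's code; it uses the modern torch.fft API instead of the torch.rfft of PyTorch 0.4.0, and the class and variable names are assumptions.

    # Minimal sketch of a DCF layer in the Fourier domain (illustrative).
    import torch

    class DCFLayer(torch.nn.Module):
        def __init__(self, lambda0=1e-4):
            super().__init__()
            self.lambda0 = lambda0  # ridge-regression regularizer

        def forward(self, x, z, y):
            # x: template features (B, C, H, W), z: search features (B, C, H, W)
            # y: desired Gaussian response label (1, 1, H, W)
            xf = torch.fft.rfft2(x)
            zf = torch.fft.rfft2(z)
            yf = torch.fft.rfft2(y)
            # template energy summed over feature channels
            kxx = (xf * xf.conj()).sum(dim=1, keepdim=True)
            # conjugate of the closed-form multi-channel filter
            hf_conj = yf * xf.conj() / (kxx + self.lambda0)
            # correlation response on the search patch
            rf = (zf * hf_conj).sum(dim=1, keepdim=True)
            return torch.fft.irfft2(rf, s=x.shape[-2:])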

Contents

  1. Requirements
  2. Test
  3. Train
  4. Citing DCFNet

Requirements

git clone --depth=1 https://github.com/foolwood/DCFNet_pytorch

This code requires PyTorch 0.4.0 and opencv-python:

conda install pytorch torchvision -c pytorch
conda install -c menpo opencv

You will also need the training data (ILSVRC2015 VID) and the test dataset (OTB).

Test

cd DCFNet_pytorch/track 
ln -s /path/to/your/OTB2015 ./dataset/OTB2015
ln -s ./dataset/OTB2015 ./dataset/OTB2013
cd dataset && python gen_otb2013.py
python DCFNet.py
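
For orientation, one frame of a DCF tracker such as DCFNet.py reduces to: extract features from a search patch around the previous position, compute the correlation response, move to the response peak, and blend the new features into the running template. The sketch below only illustrates that update step under the DCFLayer assumption above; it is not the repository's implementation, and net, dcf, and track_step are hypothetical names.

    # Illustrative per-frame update of a DCF tracker (hypothetical helpers).
    import torch

    def track_step(net, dcf, model_x, frame_patch, y, interp=0.01):
        # net: feature extractor, dcf: the DCFLayer sketch, model_x: template
        z = net(frame_patch)                       # search-patch features
        response = dcf(model_x, z, y)              # correlation response map
        peak = torch.nonzero(response == response.max())[0]
        dy, dx = peak[-2].item(), peak[-1].item()  # peak location in the map
        # (a real tracker wraps the cyclic peak and refines it sub-pixel,
        #  then re-crops at the new position before updating the template)
        model_x = (1 - interp) * model_x + interp * z
        return (dy, dx), model_x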

Train

  1. Download training data. (ILSVRC2015 VID)

    ./ILSVRC2015
    ├── Annotations
    │   └── VID
    │       ├── a -> ./ILSVRC2015_VID_train_0000
    │       ├── b -> ./ILSVRC2015_VID_train_0001
    │       ├── c -> ./ILSVRC2015_VID_train_0002
    │       ├── d -> ./ILSVRC2015_VID_train_0003
    │       ├── e -> ./val
    │       ├── ILSVRC2015_VID_train_0000
    │       ├── ILSVRC2015_VID_train_0001
    │       ├── ILSVRC2015_VID_train_0002
    │       ├── ILSVRC2015_VID_train_0003
    │       └── val
    ├── Data
    │   └── VID .......... same layout as Annotations
    └── ImageSets
        └── VID
    
  2. Prepare the training data for the dataloader (see the pair-sampling sketch after this list).

    cd DCFNet_pytorch/train/dataset
    python parse_vid.py <VID_path>  # save all vid info in a single json
    python gen_snippet.py  # generate snippets
    python crop_image.py  # crop and generate a json for dataloader
    
  3. Training (on multiple GPUs :zap: :zap: :zap: :zap:).

    cd DCFNet_pytorch/train/
    CUDA_VISIBLE_DEVICES=0,1,2,3 python train_DCFNet.py
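
The json written by crop_image.py is what the training dataloader reads; its exact layout is not reproduced here. The sketch below only shows the shape of a pair-sampling Dataset under the assumption that the index maps each snippet to a list of cropped frame paths; the class name and the format are hypothetical.

    # Hypothetical pair-sampling dataset; the json layout is assumed.
    import json
    import random
    import cv2
    import torch
    from torch.utils.data import Dataset

    class VIDPairDataset(Dataset):
        def __init__(self, json_path, max_gap=100):
            with open(json_path) as f:
                self.snippets = json.load(f)   # assumed: [[path, path, ...], ...]
            self.max_gap = max_gap             # largest template/search frame gap

        def __len__(self):
            return len(self.snippets)

        def __getitem__(self, idx):
            frames = self.snippets[idx]
            i = random.randrange(len(frames))
            j = min(len(frames) - 1, i + random.randrange(1, self.max_gap + 1))
            template, search = cv2.imread(frames[i]), cv2.imread(frames[j])
            to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).float()
            return to_tensor(template), to_tensor(search)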
    

Fine-tune hyper-parameters

  1. After training, you can simply test the model with the default parameters.

    cd DCFNet_pytorch/track/
    python DCFNet.py --model ../train/work/crop_125_2.0/checkpoint.pth.tar
    
  2. Search for better hyper-parameters.

    CUDA_VISIBLE_DEVICES=0 python tune_otb.py  # run in parallel to speed up the search
    python eval_otb.py OTB2013 * 0 10000
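
The options accepted by tune_otb.py are not listed here; conceptually the search is a sweep over the online tracking hyper-parameters, with one result folder per setting so that eval_otb.py can rank them afterwards. A naive sketch of generating such a grid, with illustrative parameter names only:

    # Sketch of a hyper-parameter grid; names are illustrative assumptions.
    from itertools import product

    grid = {
        "lambda0": [1e-4, 1e-3],          # ridge regularizer
        "interp_factor": [0.005, 0.01],   # template update rate
        "padding": [1.5, 2.0],            # search-region padding
    }

    configs = [dict(zip(grid, values)) for values in product(*grid.values())]
    for cfg in configs:
        # each configuration would be run on OTB2013 and its result folder
        # scored afterwards with eval_otb.py
        print(cfg)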
    

Citing DCFNet

If you find DCFNet useful in your research, please consider citing:

@article{wang2017dcfnet,
  title={DCFNet: Discriminant Correlation Filters Network for Visual Tracking},
  author={Wang, Qiang and Gao, Jin and Xing, Junliang and Zhang, Mengdan and Hu, Weiming},
  journal={arXiv preprint arXiv:1704.04057},
  year={2017}
}