This repository contains the source code for our ECCV 2020 paper "Task-conditioned Domain Adaptation for Pedestrian Detection in Thermal Imagery".
Note that all of these instructions assume a Linux environment.
For all of the following commands, arguments shown in [...] are optional; pass your own values or omit them to use the default parameters described above.
Detect bounding boxes on an image, a folder of images, or a video by passing the corresponding path as the parameter. The result is saved under the same name with the suffix 'predicted'.
python detect.py image/video/folder
Example:
python detect.py thermal_kaist.png
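Detection works the same way on a folder of images or on a video file; for instance (both paths below are placeholders, and the folder must contain only images):

python detect.py my_images/
python detect.py my_video.avi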
Evaluate the mean Average Precision (mAP) as well as the Log-Average Miss Rate (LAMR) of the detector over the test set. Note that Log-Average Miss Rate and precision under the reasonable setting (daytime, nighttime, and day & night) are the standard evaluation protocol for the state of the art on the KAIST dataset.
python evaluation.py [weightfile]
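For reference, here is a minimal sketch of how LAMR is typically computed under the standard Caltech/KAIST protocol; it is illustrative only and not necessarily the exact implementation in evaluation.py:

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Geometric mean of the miss rate sampled at 9 FPPI reference
    points log-spaced in [1e-2, 1e0]; fppi must be increasing."""
    fppi = np.asarray(fppi)
    miss_rate = np.asarray(miss_rate)
    refs = np.logspace(-2.0, 0.0, num=9)
    samples = []
    for ref in refs:
        idx = np.where(fppi <= ref)[0]
        # miss rate at the largest FPPI not exceeding the reference;
        # fall back to the first point if none qualifies
        mr = miss_rate[idx[-1]] if idx.size else miss_rate[0]
        samples.append(max(mr, 1e-10))  # guard against log(0)
    return float(np.exp(np.mean(np.log(samples))))

# e.g. log_average_miss_rate([0.01, 0.1, 1.0], [0.50, 0.30, 0.20]) ~= 0.36
```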
Given a folder of images with annotations, draw bounding boxes on every image: correct detections (blue boxes), wrong detections (red boxes), and missed detections (green boxes).
python drawBBxs.py imagefolder
There is a folder 'kaist_examples' that you can use to draw bounding boxes. Note that if you want to run detection instead, the folder must contain only images (no annotation files).
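The blue/red/green split above is the usual IoU-based matching between detections and ground truth; here is a minimal illustrative sketch (the 0.5 threshold and the [x1, y1, x2, y2] box format are assumptions, not necessarily what drawBBxs.py uses):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format (assumed here)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-10)

def classify(detections, ground_truth, thresh=0.5):
    """Greedy split into correct (blue), wrong (red), missed (green)."""
    matched = set()
    correct, wrong = [], []
    for det in detections:
        ious = [iou(det, gt) for gt in ground_truth]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= thresh and best not in matched:
            matched.add(best)
            correct.append(det)
        else:
            wrong.append(det)
    missed = [gt for i, gt in enumerate(ground_truth) if i not in matched]
    return correct, wrong, missed
```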
Before training on KAIST or your own dataset, some preparation steps are needed. Then you can run experiments:
python train.py [-x y]
With -x and y as follows:
For example, if you want to train from the yolov3 weights for a maximum of 100 epochs:
python train.py -w weights/yolov3.weights -e 100
Weight and model files are saved in the backup folder at each epoch, and the training log is saved in backup/savelog.txt.
Note that you can control everything in train.py.
You can download other pre-trained weight files, such as kaist_visible_detector.weights, kaist_thermal_detector.weights, or our best weights for the KAIST detector augmented with a GAN model, kaist_mixing80_20.weights. Remember to place them in the 'weights' directory.
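For example, to fine-tune from the pre-trained thermal detector with the same flags as above:

python train.py -w weights/kaist_thermal_detector.weights -e 100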
See the loss curve during training, as well as the precision and recall curves on the validation set for every epoch. Note that the validation set is automatically split from the training set (10%) during training.
python seeloss.py
For the task-conditioned network (TC_Det):
python seeloss_condition.py
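If you prefer to inspect the training log directly, here is a minimal plotting sketch; it assumes (hypothetically) that each line of backup/savelog.txt contains a substring like "loss: <number>", so adapt the regex to the actual log format:

```python
import re
import matplotlib.pyplot as plt

losses = []
with open('backup/savelog.txt') as f:
    for line in f:
        # assumed log format: "... loss: 1.234 ..." -- adjust as needed
        m = re.search(r'loss[:=]\s*([0-9]*\.?[0-9]+)', line)
        if m:
            losses.append(float(m.group(1)))

plt.plot(losses)
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.savefig('loss_curve.png')
```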
Plot the Log-Average Miss Rate (LAMR) and Average Precision for both our ablation studies and the state-of-the-art multispectral results. The plots are saved as image files in this repository.
python plot_LAMR.py
Note that before plotting results, check all the result .JSON files in results/ablation/.
If you want to plot the comparison with the multispectral state-of-the-art results: (1) download the multispectral SOTA results and extract them to the directory results/SOTA; (2) in Plot_LAMR.py, comment out the ablation-studies part (lines 80 -> 84) and uncomment the SOTA part (lines 45 -> 55 and lines 90 -> 100).
If you want to plot your own results alongside: (1) evaluate your detector; (2) you will then see a detection_result.JSON file in the results/ folder (you may rename it); (3) in Plot_LAMR.py, add your detector after line 55, referencing your *.JSON file; (4) add its short name (line 85), then run Plot_LAMR.py.
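A quick way to sanity-check a result file before plotting (purely illustrative; the internal structure of the JSON is not assumed here):

```python
import json

# Inspect the top level of an evaluation result file produced above.
with open('results/detection_result.JSON') as f:
    result = json.load(f)

print(type(result).__name__)
if isinstance(result, dict):
    print(list(result.keys()))
elif isinstance(result, list):
    print(len(result), 'entries')
```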
Try the demo:
python demo.py
Other numbers:
| Reasonable | Day & Night | Day | Night |
|---|---|---|---|
| Precision | 82.87% | 77.16% | 93.82% |
| Miss Rate | 27.11% | 34.81% | 10.31% |
The paper is available here: Task-conditioned Domain Adaptation for Pedestrian Detection in Thermal Imagery.
We really hope this repository is useful for you. Please cite our paper as:
@inproceedings{KieuECCV2020taskconditioned,
Author = {Kieu, My and Bagdanov, Andrew D and Bertini, Marco and Del Bimbo, Alberto},
Booktitle = {Proc. of European Conference on Computer Vision (ECCV)},
Title = {Task-conditioned Domain Adaptation for Pedestrian Detection in Thermal Imagery},
Year = {2020}
}
If you have any comments or questions about this repository, please leave them in Issues.
For other contributions, please contact me by email: [email protected].
Thank you so much for your interest in our work.