AutoTrackAnything is a universal, flexible and interactive tool for automatic object tracking over thousands of frames. It is built upon XMem, YOLOv8 and MobileSAM (Segment Anything) and can track anything that YOLOv8 detects.
It is a multipurpose tracking approach that combines YOLOv8, SAM and XMem with my wrapper and algorithms.
In this repo it is used for person detection, but you can easily change the task (see point 4).
I also use keypoint confidence to add only clearly visible persons (you can remove this later).
It's not a one-size-fits-all approach, so you may need to tune hyperparameters or train models for your task. But it's a very useful and easy-to-start project that you can use for multiple object tracking.
On my task (person tracking) it works better than other approaches: MOT, ByteTrack, DeepSORT, Kalman filter, etc.
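The general idea is to run the expensive detector only occasionally and let the memory model carry masks between detections (controlled by the `yolo_every` parameter below). Here is a minimal sketch of that pattern with stub functions standing in for YOLOv8, MobileSAM and XMem — these are NOT the real class or function names used in this repo:

```python
# Minimal sketch of the detect-then-propagate pattern.
# `detect`, `segment` and `propagate` are stubs standing in for
# YOLOv8, MobileSAM and XMem; they are not this repo's real APIs.

def detect(frame):
    # YOLOv8 stand-in: return bboxes for objects in the frame.
    return [(10, 10, 50, 50)]

def segment(frame, bboxes):
    # MobileSAM stand-in: turn bboxes into segmentation masks.
    return [f"mask_for_{b}" for b in bboxes]

def propagate(frame, masks):
    # XMem stand-in: propagate existing masks to the current frame.
    return masks

def track(frames, yolo_every=2):
    masks = []
    detections_ran_on = []
    for i, frame in enumerate(frames):
        if i % yolo_every == 0:
            # Run the (expensive) detector only every `yolo_every` frames...
            bboxes = detect(frame)
            masks = segment(frame, bboxes)
            detections_ran_on.append(i)
        else:
            # ...and let the memory model carry masks in between.
            masks = propagate(frame, masks)
    return detections_ran_on

print(track(range(5)))  # detector runs on frames 0, 2 and 4
```

With `yolo_every=2`, detection runs on every second frame; mask propagation handles the rest, which is what makes long videos tractable.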
pip3 install -r requirements.txt
Note: if you are using a GPU, you need to install a CUDA-enabled build of torch. Otherwise, the CPU will be used.
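For example, for CUDA 11.8 the install looks like this (the index URL depends on your CUDA version — check pytorch.org for the one matching your setup):

```shell
# Install a CUDA-enabled PyTorch build (example for CUDA 11.8).
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# Quick check that torch can see the GPU (prints True if it can):
python3 -c "import torch; print(torch.cuda.is_available())"
```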
python3 download_models.py
Set the parameters you need in config.py.
(can skip) pose-estimation.py — edit it if you want to adapt the pipeline to your own task.
You can simply run it on your video with the command:
python3 tracking.py --video_path=INPUT_VIDEO_PATH.mp4 --width=1280 \
--height=768 --frames_to_propagate=600 --output_video_path=RESULT_VIDEO_PATH.mp4 --device=0 \
--person_conf=0.6 --kpts_conf=0.4 --iou_thresh=0.15 --yolo_every=2 --output_path=OUTPUT_CSV_PATH.csv
You can also set frames_to_propagate: the number of frames you want to process.
After that you will get an output video with annotations (detection and tracking results) and a CSV file with all information about the objects in every frame.
I wrote custom Precision, Recall and F1-score calculations for the tracking task. They compare bbox positions and their ids.
⚠️ Please use it with labels exported from CVAT (the structure is described below)
You can simply run it on your labeled video or frames with the command:
python3 metrics_counting.py --labels_dir=LABELS_DIR_PATH --width=1280 \
--height=768 --device=0 --person_conf=0.6 --kpts_conf=0.4 \
--iou_thresh=0.15 --print_every=10
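The idea behind these metrics can be sketched as follows: a predicted box counts as a true positive when a ground-truth box with the same id overlaps it with IoU at or above iou_thresh. This is a simplified per-frame sketch of that idea, not the actual implementation in metrics_counting.py:

```python
# Simplified sketch of IoU-based tracking metrics: a prediction is a
# true positive when a ground-truth box with the SAME id overlaps it
# with IoU >= iou_thresh. Illustrates the idea, not the repo's code.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2) in pixels.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def tracking_prf(gt, pred, iou_thresh=0.15):
    # gt and pred each map object id -> box for one frame.
    tp = sum(1 for oid, box in pred.items()
             if oid in gt and iou(gt[oid], box) >= iou_thresh)
    fp = len(pred) - tp   # predicted boxes with no matching gt id/position
    fn = len(gt) - tp     # gt boxes the tracker missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, one matched id, one spurious prediction and one missed ground-truth box give precision, recall and F1 of 0.5 each.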
Note that the structure of LABELS_DIR_PATH should be:
LABELS_DIR_PATH
|- first_dir
   |- obj_train_data
      |- frame0.jpg
      |- frame0.txt
      |- frame1.jpg
      |- frame1.txt
      ...
|- second_dir
...
Example: my LABELS_DIR_PATH is test_files.
Labels: YOLO format (a directory with txt files corresponding to frames). Format of example.txt:
0 0.265682 0.430208 0.057479 0.279509
1 0.483107 0.486296 0.069411 0.337759
...
5 0.743799 0.467407 0.060016 0.289593
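Each line is `class cx cy w h`, with center coordinates and sizes normalized to [0, 1]. A small sketch of converting one such line to an absolute-pixel (x1, y1, x2, y2) box — this assumes the standard YOLO label convention and is not code from this repo:

```python
# Convert one YOLO-format label line ("class cx cy w h", values
# normalized to [0, 1]) into a class id and a pixel (x1, y1, x2, y2) box.

def yolo_line_to_bbox(line, img_w, img_h):
    cls, cx, cy, w, h = line.split()
    # Scale normalized center/size to pixels.
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Center/size -> corner coordinates.
    x1, y1 = cx - w / 2, cy - h / 2
    return int(cls), (round(x1), round(y1), round(x1 + w), round(y1 + h))

# First line of the example above, for a 1280x768 frame:
cls, box = yolo_line_to_bbox("0 0.265682 0.430208 0.057479 0.279509", 1280, 768)
print(cls, box)
```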
It's simple to change pose-estimation.py and use a different detection model (or your own custom-trained model): the
get_filtered_bboxes_by_confidence
method just has to return a list of bboxes from your model.

Please star and cite this repo if you find the project useful!
@software{AutoTrackAnything,
author = {Roman Lyskov},
title = {AutoTrackAnything},
year = {2024},
url = {https://github.com/licksylick/AutoTrackAnything},
license = {MIT}
}
@inproceedings{cheng2022xmem,
title={{XMem}: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model},
author={Cheng, Ho Kei and Alexander G. Schwing},
booktitle={ECCV},
year={2022}
}
@article{mobile_sam,
title={Faster Segment Anything: Towards Lightweight SAM for Mobile Applications},
author={Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung-Ho and Lee, Seungkyu and Hong, Choong Seon},
journal={arXiv preprint arXiv:2306.14289},
year={2023}
}
@software{yolov8_ultralytics,
author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
title = {Ultralytics YOLOv8},
version = {8.0.0},
year = {2023},
url = {https://github.com/ultralytics/ultralytics},
orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
license = {AGPL-3.0}
}