YOLOv4 object detector using TensorRT engine
This package contains the yolov4_trt_node, which performs inference using NVIDIA's TensorRT engine. It works for both YOLOv3 and YOLOv4; adjust the commands below according to which YOLO model you are using.
Install pycuda (this takes a while)
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/dependencies
$ ./install_pycuda.sh
Install Protobuf (this takes a while)
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/dependencies
$ ./install_protobuf-3.8.0.sh
Install onnx (depends on Protobuf above)
$ sudo pip3 install onnx==1.4.1
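After the three installs above, it can help to confirm that the Python dependencies actually resolve before building. This is a minimal sketch (the `is_installed` helper is not part of this package):

```python
import importlib.util


def is_installed(module_name: str) -> bool:
    """Return True if `module_name` can be imported in this environment."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # find_spec raises if a dotted name's parent package is missing
        return False


# Import names for the dependencies installed above
for mod in ("pycuda", "google.protobuf", "onnx"):
    status = "ok" if is_installed(mod) else "MISSING"
    print(f"{mod}: {status}")
```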
$ cd ~/catkin_ws && catkin_make
$ source devel/setup.bash
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/plugins
$ make
This will generate a libyolo_layer.so file
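If the node later fails to load the plugin, a quick existence check saves debugging time. `find_yolo_plugin` is a hypothetical helper, not part of the package; only the `libyolo_layer.so` filename comes from the build step above:

```python
from pathlib import Path


def find_yolo_plugin(plugins_dir: str) -> Path:
    """Return the path to libyolo_layer.so, or raise if `make` hasn't been run."""
    plugin = Path(plugins_dir) / "libyolo_layer.so"
    if not plugin.is_file():
        raise FileNotFoundError(
            f"{plugin} not found -- run `make` in the plugins directory first")
    return plugin
```

For example, `find_yolo_plugin(str(Path.home() / "catkin_ws/src/yolov4_trt_ros/plugins"))` should return the plugin path after a successful `make`.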
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/yolo
** Place the weights and config files in this directory, named consistently with the model (e.g. yolov4.weights and yolov4.cfg).
Run the conversion script to convert the model to a TensorRT engine file
$ ./convert_yolo_trt
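The conversion flow typically names the resulting engine file after the model and its input resolution, e.g. `yolov4-416.trt`. That naming is an assumption about this package's scripts (check the actual output of the conversion on your system); the sketch below only illustrates the convention:

```python
def engine_filename(model: str, input_shape: int) -> str:
    """Assumed engine-file naming: '<model>-<input_shape>.trt'."""
    if model not in ("yolov3", "yolov4"):
        raise ValueError(f"unsupported model: {model}")
    if input_shape not in (288, 416, 608):
        raise ValueError(f"unsupported input shape: {input_shape}")
    return f"{model}-{input_shape}.trt"
```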
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/utils
$ vim yolo_classes.py
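When training on custom classes, the edit boils down to replacing the class-name list and keeping `category_num` equal to its length. This is a hedged sketch of what such a mapping might look like; the actual structure and names inside `yolo_classes.py` may differ:

```python
# Hypothetical custom label list -- replace with your own classes
CUSTOM_CLASSES_LIST = [
    "person",
    "bicycle",
    "car",
]


def get_cls_dict(category_num: int) -> dict:
    """Map class IDs to names; fall back to generic 'CLS<id>' labels."""
    if category_num == len(CUSTOM_CLASSES_LIST):
        return {i: name for i, name in enumerate(CUSTOM_CLASSES_LIST)}
    return {i: f"CLS{i}" for i in range(category_num)}
```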
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/launch
yolov3_trt.launch
: change the topic_name
yolov4_trt.launch
: change the topic_name
video_source.launch
: change the input format (refer to this link)
Note: Run the launch files separately in different terminals
# For csi input
$ roslaunch yolov4_trt_ros video_source.launch input:=csi://0
# For video input
$ roslaunch yolov4_trt_ros video_source.launch input:=/path_to_video/video.mp4
# For USB camera
$ roslaunch yolov4_trt_ros video_source.launch input:=v4l2://0
# For YOLOv3 (single input)
$ roslaunch yolov4_trt_ros yolov3_trt.launch
# For YOLOv4 (single input)
$ roslaunch yolov4_trt_ros yolov4_trt.launch
# For YOLOv4 (multiple input)
$ roslaunch yolov4_trt_ros yolov4_trt_batch.launch
$ cd /usr/bin/
$ sudo ./nvpmodel -m 0 # Enable the 2 Denver CPUs
$ sudo ./jetson_clocks # Maximise CPU/GPU performance
str model = "yolov3" or "yolov4"
str model_path = "/abs_path_to_model/"
int input_shape = 288/416/608
int category_num = 80
double conf_th = 0.5
bool show_img = True
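The constraints implied by the parameter list above can be checked up front. `validate_params` is a hypothetical helper for illustration only; the node itself reads these values from the ROS parameter server:

```python
def validate_params(model: str, model_path: str, input_shape: int,
                    category_num: int, conf_th: float, show_img: bool) -> None:
    """Sanity-check node parameters against the documented ranges."""
    assert model in ("yolov3", "yolov4"), "model must be 'yolov3' or 'yolov4'"
    assert model_path.startswith("/"), "model_path must be an absolute path"
    assert input_shape in (288, 416, 608), "unsupported input shape"
    assert category_num > 0, "category_num must be positive"
    assert 0.0 <= conf_th <= 1.0, "conf_th is a confidence threshold in [0, 1]"
    assert isinstance(show_img, bool), "show_img must be a boolean"
```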
Default Input FPS from CSI camera = 30.0
# In line 359, change this line
mOptions.frameRate = 15
# To desired frame_rate
mOptions.frameRate = desired_frame_rate
Model | Hardware | FPS | Inference Time (ms) |
---|---|---|---|
Yolov4-416 | Xavier AGX | 40.0 | 25.0 |
Yolov4-416 | Jetson TX2 | 16.0 | 62.5 |
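The per-frame inference time in the table is simply the reciprocal of the frame rate (e.g. 1/40 FPS = 0.025 s = 25 ms):

```python
def inference_ms(fps: float) -> float:
    """Per-frame inference time in milliseconds for a given frame rate."""
    return 1000.0 / fps


print(inference_ms(40.0))  # 25.0  (Xavier AGX row)
print(inference_ms(16.0))  # 62.5  (Jetson TX2 row)
```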
Many thanks to the author of the TensorRT samples project. I referenced his source code and adapted it to ROS for robotics applications. I also used the pycuda and protobuf installation scripts from his project.
That code is under the MIT License.
Many thanks to the author for his work on Jetson Inference with ROS. I used the video_source input from his project for capturing video inputs.
That code is under the NVIDIA License.