C++ YOLOv5 ONNX Runtime inference code for object detection.
To build the project, run the following commands, changing the ONNXRUNTIME_DIR CMake option to point at your ONNX Runtime installation:

```shell
mkdir build
cd build
cmake .. -DONNXRUNTIME_DIR=path_to_onnxruntime -DCMAKE_BUILD_TYPE=Release
cmake --build .
```
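For orientation, a minimal CMakeLists.txt consistent with the ONNXRUNTIME_DIR option above might look like the sketch below. The target name matches the executable used later in this README; the source file path and library layout under ONNXRUNTIME_DIR are assumptions, not the project's actual build script.

```cmake
cmake_minimum_required(VERSION 3.12)
project(yolo_ort)

# OpenCV is required for image loading and preprocessing
find_package(OpenCV REQUIRED)

# src/main.cpp is an assumed source layout
add_executable(yolo_ort src/main.cpp)

# ONNXRUNTIME_DIR is passed on the cmake command line (see above)
target_include_directories(yolo_ort PRIVATE "${ONNXRUNTIME_DIR}/include")
target_link_libraries(yolo_ort PRIVATE ${OpenCV_LIBS})
if (WIN32)
    target_link_libraries(yolo_ort PRIVATE "${ONNXRUNTIME_DIR}/lib/onnxruntime.lib")
else ()
    target_link_libraries(yolo_ort PRIVATE "${ONNXRUNTIME_DIR}/lib/libonnxruntime.so")
endif ()
```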
Before running the executable, convert your PyTorch model to ONNX if you haven't done so already; see the official YOLOv5 export tutorial.
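As a rough sketch, the export typically looks like the commands below, run from a clone of the official ultralytics/yolov5 repository (the weights filename is an assumption; substitute your own checkpoint):

```shell
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
# yolov5m.pt is an assumed checkpoint; export writes yolov5m.onnx alongside it
python export.py --weights yolov5m.pt --include onnx
```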
On Windows: to run the executable, either add the OpenCV and ONNX Runtime libraries to your PATH environment variable, or place the required libraries (onnxruntime.dll and opencv_world.dll) next to the executable.
Run from the CLI:

```shell
./yolo_ort --model_path yolov5.onnx --image bus.jpg --class_names coco.names --gpu
# On Windows, run ./yolo_ort.exe with the same arguments
```
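To illustrate how the flags above map onto program options, here is a minimal, self-contained parsing sketch. The flag names come from the command line shown in this README; the `Options` struct and `parseArgs` helper are illustrative assumptions, not the project's actual code.

```cpp
#include <string>
#include <vector>

// Illustrative container for the CLI flags shown above (hypothetical, not the
// project's real option struct).
struct Options {
    std::string modelPath;       // --model_path: path to the ONNX model
    std::string imagePath;       // --image: input image to run detection on
    std::string classNamesPath;  // --class_names: file with one class name per line
    bool useGpu = false;         // --gpu: request the GPU execution provider
};

// Parse argv-style tokens into Options. Flags that expect a value consume the
// next token; unknown flags are ignored.
inline Options parseArgs(const std::vector<std::string>& args) {
    Options opt;
    for (size_t i = 0; i < args.size(); ++i) {
        const std::string& a = args[i];
        if (a == "--gpu") {
            opt.useGpu = true;
        } else if (i + 1 < args.size()) {
            if (a == "--model_path")        opt.modelPath = args[++i];
            else if (a == "--image")        opt.imagePath = args[++i];
            else if (a == "--class_names")  opt.classNamesPath = args[++i];
        }
    }
    return opt;
}
```

Keeping the GPU switch a plain boolean flag (no value) matches the usage shown above; omitting `--gpu` leaves `useGpu` false, so inference falls back to the CPU.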
YOLOv5m onnx: