This is a repo for training MobileNet-SSD v2, converting it to TFLite, and running it with C++ on x86 and arm64.
System Environment:
System: Ubuntu 18.04
OpenCV: 3.2
TensorFlow: 1.13.1
Instructions:
First, create a label map (label.pbtxt) for your classes, for example:
item {
id: 1
name: 'person'
}
item {
id: 2
name: 'car'
}
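For reference, a label map in this simple form can be parsed without the TF Object Detection API; the regex-based helper below is a minimal sketch (not part of this repo) that turns it into an id-to-name dict:

```python
import re

def parse_label_map(pbtxt_text):
    """Parse a simple label.pbtxt into an id -> name dict."""
    items = {}
    # Match each "item { ... }" block, then pull out its id and name fields.
    for block in re.findall(r"item\s*\{(.*?)\}", pbtxt_text, re.S):
        id_m = re.search(r"id:\s*(\d+)", block)
        name_m = re.search(r"name:\s*'([^']*)'", block)
        if id_m and name_m:
            items[int(id_m.group(1))] = name_m.group(1)
    return items

pbtxt = """
item {
  id: 1
  name: 'person'
}
item {
  id: 2
  name: 'car'
}
"""
print(parse_label_map(pbtxt))  # {1: 'person', 2: 'car'}
```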
Convert your images and annotations to TFRecord with the dataset tools in the models repo.
Commands:
cd models/research/
export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
protoc object_detection/protos/*.proto --python_out=.
python object_detection/dataset_tools/create_coco_tf_record.py --image_dir=/path_to/img/ --ann_dir=/path_to/ann/ --output_path=/path_to/train.record --label_map_path=/path_to/demo/label.pbtxt
In pipeline.config, point the train and eval input readers at your TFRecord files:
train_input_reader {
  tf_record_input_reader {
    input_path: "/path_to/train.record"
  }
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "/path_to/train.record"
  }
}
cd models/research/
export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
protoc object_detection/protos/*.proto --python_out=.
python object_detection/legacy/train.py --train_dir=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/CP/ --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/pipeline.config
tensorboard --logdir=/path_to/mobilenet_ssd_v2_train/CP
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=/path_to/pipeline.config --trained_checkpoint_prefix=/path_to/mobilenet_ssd_v2_train/CP/model.ckpt-xxxxxx --output_directory=/path_to/mobilenet_ssd_v2_train/IG/
{
  "1": "person",
  "2": "car"
}
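If you fine-tuned on your own classes, the id-to-name JSON passed to demo.py as js_file can be regenerated from the same mapping; the sketch below assumes js_file maps string ids to class names (the exact schema demo.py expects is an assumption):

```python
import json

# Assumed id -> name mapping; replace with your fine-tuned classes.
labels = {1: "person", 2: "car"}

# Write the js_file consumed by demo.py: string ids -> class names.
with open("label.json", "w") as f:
    json.dump({str(i): n for i, n in labels.items()}, f, indent=2)

print(open("label.json").read())
```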
python demo.py PATH_TO_FROZEN_GRAPH cam_dir js_file
python object_detection/export_tflite_ssd_graph.py --input_type=image_tensor --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/pipeline.config --trained_checkpoint_prefix=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/model.ckpt --output_directory=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite --add_postprocessing_op=true
tflite_convert --output_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/model.tflite --graph_def_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/tflite_graph.pb --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --input_shape=1,300,300,3 --allow_custom_ops --output_format=TFLITE --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127
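The --mean_values=128 and --std_dev_values=127 flags define how the quantized uint8 inputs are dequantized, as real = (q - mean) / std. A quick check of the range this produces:

```python
# QUANTIZED_UINT8 inputs are dequantized as real = (q - mean) / std.
MEAN, STD = 128.0, 127.0

def dequantize(q):
    return (q - MEAN) / STD

# The uint8 range [0, 255] maps to roughly [-1.008, 1.0],
# matching the [-1, 1] input normalization MobileNet expects.
print(dequantize(0))    # ~ -1.008
print(dequantize(255))  # 1.0
```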
demo.cpp loads the converted model with:
tflite::FlatBufferModel::BuildFromFile("../model.tflite");
If you fine-tuned your model, update labelmap.txt with your own classes.
Run demo.cpp on x86 Ubuntu; make sure OpenCV and Bazel are installed.
bazel build -c opt //tensorflow/lite:libtensorflowlite.so --fat_apk_cpu=arm64-v8a
mkdir build
cd build
cmake ..
make -j
./demo
Run demo.cpp on arm64-v8a ubuntu.
mkdir build
cd build
cmake ..
make -j
./demo
If you hit a flatbuffers error, build flatbuffers on your desktop and copy its header files and .a library into tensorflow_object_detection_tflite/include and tensorflow_object_detection_tflite/lib, respectively. See https://github.com/google/flatbuffers/issues/5569#issuecomment-543777629 for how to build it.
Result image