# Betapose for 6D pose estimation
Please refer to our paper for a detailed explanation. The arXiv link is here.

In the following, `ROOT` refers to the folder containing this README file.

Before running this repository, please take a look at this one: https://github.com/sjtuytc/segmentation-driven-pose. It is much greater.
## Prepare the dataset

Place the LineMod dataset in `DATAROOT/models` and `DATAROOT/test`, where `DATAROOT` can be any folder you'd like to keep the LineMod dataset in.

## 1. Designate keypoints

You can skip this step, since we have provided designated keypoint files in `$ROOT/1_keypoint_designator/assets/sifts/`.
Enter `$ROOT/1_keypoint_designator/`:

```
$ cd ROOT/1_keypoint_designator/
```

Place the input model file (e.g. `DATAROOT/models/obj_01.ply`) in `./assets/models/`, then build and run:

```
$ sh build_and_run.sh
```

The output file is in `./assets/sifts/`. It is a PLY file storing the 3D coordinates of the designated keypoints.
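If you want to reuse the designated keypoints elsewhere, they can be read back from the PLY file. Below is a minimal sketch of an ASCII-PLY vertex reader; `load_ply_vertices` is an illustrative helper, not part of this repository, and it assumes the file is ASCII with x, y, z as the first three vertex properties.

```python
# Illustrative helper (not part of the repository): parse vertex
# coordinates from a simple ASCII PLY file.
def load_ply_vertices(lines):
    """Return (x, y, z) tuples for the vertices of an ASCII PLY.

    Assumes x, y, z are the first three vertex properties; verify
    this for your own files.
    """
    n_vertices = 0
    body_start = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line.strip() == "end_header":
            body_start = i + 1
            break
    return [tuple(float(v) for v in line.split()[:3])
            for line in lines[body_start:body_start + n_vertices]]

# A tiny two-keypoint example file, inlined for demonstration.
sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 1.0 2.0
3.5 4.5 5.5"""
print(load_ply_vertices(sample.splitlines()))
# [(0.0, 1.0, 2.0), (3.5, 4.5, 5.5)]
```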
## 2. Annotate keypoints

Enter `$ROOT/2_keypoint_annotator/` and run the annotator:

```
$ cd ROOT/2_keypoint_annotator/
$ python annotate_keypoint.py --obj_id 1 --total_kp_number 50 --output_base ROOT/3_6Dpose_estimator/data --sixd_base DATAROOT
```

Type the following to see the meaning of the options:

```
$ python annotate_keypoint.py -h
```

The annotations are saved in `annot_train.h5` and `annot_eval.h5`. The corresponding training images are in the folders `train` and `eval`.
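Conceptually, annotating a keypoint means projecting a designated 3D keypoint into the image using the ground-truth object pose and the camera intrinsics. The sketch below shows that projection in plain Python; `project_keypoints` is an illustrative function, not the repository's API.

```python
# Illustrative sketch (not the repository's API): project 3D model
# keypoints into pixel coordinates with a pinhole camera model.
def project_keypoints(keypoints, R, t, K):
    """Project 3D keypoints into 2D pixel coordinates.

    keypoints: iterable of (x, y, z) model coordinates
    R: 3x3 rotation (nested lists), t: translation (tx, ty, tz)
    K: 3x3 camera intrinsic matrix
    """
    pixels = []
    for X in keypoints:
        # Camera-frame coordinates: Xc = R @ X + t
        Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
        # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy
        u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
        v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
        pixels.append((u, v))
    return pixels

# Toy example: identity rotation, camera 2 units in front of the object.
K = [[100, 0, 50], [0, 100, 50], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_keypoints([(0, 0, 0), (1, 0, 0)], R, (0, 0, 2), K))
# [(50.0, 50.0), (100.0, 50.0)]
```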
## 3. Train YOLO

Enter `$ROOT/3_6Dpose_estimator/train_YOLO` and build:

```
$ cd ROOT/3_6Dpose_estimator/train_YOLO
$ make
```

See `./scripts` for more help. Run `train_single.sh` or `train_all.sh` to train the network. Place the trained model in `$ROOT/3_6Dpose_estimator/models/yolo/`.
## 4. Train KPD

Enter `$ROOT/3_6Dpose_estimator/train_KPD`:

```
$ cd ROOT/3_6Dpose_estimator/train_KPD
```

Modify the dataset paths in `./src/utils/dataset/coco.py` so that they point to the previously annotated dataset. Examples are given in these lines.

Train without the `--addDPG` option first:

```
$ python src/train.py --trainBatch 28 --expID seq5_Nov_1_1 --optMethod adam
```

Then train with the `--addDPG` option, loading the model obtained in the previous run:

```
$ python src/train.py --trainBatch 28 --expID seq5_dpg_Nov_1_1 --optMethod adam --loadModel ./exp/coco/seq5_Nov_1_1/model_100.pkl --addDPG
```

Monitor the training process with TensorBoard:

```
$ tensorboard --logdir ./
```
## 5. Evaluate

Move back to the root of the pose estimator and run the evaluation:

```
$ cd ROOT/3_6Dpose_estimator/
$ CUDA_VISIBLE_DEVICES=1 python3 betapose_evaluate.py --nClasses 50 --indir /01/eval --outdir examples/seq1 --sp --profile
```

The output JSON file containing the predicted 6D poses will be in `examples/seq1`.
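The predicted poses can then be loaded for downstream use. The exact JSON schema depends on the repository version, so the sketch below is only an illustration: the field names `R` (3x3 rotation) and `t` (translation) are assumptions; check the actual output file before relying on them.

```python
import json

def load_poses(json_text):
    """Load predicted 6D poses from the evaluator's JSON output.

    The field names "R" and "t" are assumptions for illustration;
    inspect the real output file for the actual schema.
    """
    records = json.loads(json_text)
    return [(rec["R"], rec["t"]) for rec in records]

# A one-record example in the assumed schema.
sample = '[{"R": [[1, 0, 0], [0, 1, 0], [0, 0, 1]], "t": [0.0, 0.0, 400.0]}]'
poses = load_poses(sample)
print(len(poses))  # 1
```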