Chainer implementation of Pose Proposal Networks
Copyright (c) 2018 Idein Inc. & Aisin Seiki Co., Ltd. All rights reserved.
This project is licensed under the terms of the license.
Download
Go to the MPII Human Pose Dataset download page. Then download and extract both Images (12.9 GB) and Annotations (12.5 MB) to, for example, ~/work/dataset/mpii_dataset.
We need to decode mpii_human_pose_v1_u12_1.mat to generate mpii.json, which will be used for training on or evaluating the MPII test dataset.
$ sudo docker run --rm \
    -v $(pwd):/work \
    -v path/to/dataset:/mpii_dataset \
    -w /work \
    idein/chainer:4.5.0 \
    python3 convert_mpii_dataset.py /mpii_dataset/mpii_human_pose_v1_u12_2/mpii_human_pose_v1_u12_1.mat /mpii_dataset/mpii.json
It will generate mpii.json at path/to/dataset, where path/to/dataset is the root directory of the MPII dataset, for example, ~/work/dataset/mpii_dataset. For those who hesitate to use Docker, you may edit config.ini as necessary and run the script directly.
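The conversion performed by convert_mpii_dataset.py can be sketched roughly as follows. Note that the field names below are a simplified, hypothetical subset of the real MPII annotation schema, and the real script decodes the .mat file itself (e.g. via scipy.io.loadmat) rather than taking pre-parsed records:

```python
import json

def to_mpii_json(mat_annotations):
    """Convert decoded .mat annotation records into the list-of-dicts
    layout serialized as mpii.json. The input/output field names here
    are illustrative only; the official MPII schema differs."""
    entries = []
    for rec in mat_annotations:
        entries.append({
            "filename": rec["image_name"],
            "is_train": bool(rec["is_train"]),
            "joints": rec["joints"],  # list of [x, y] per keypoint
        })
    return entries

# Round-trip a fake record through JSON, as the real conversion
# would when writing mpii.json to disk.
fake = [{"image_name": "000001.jpg", "is_train": 1, "joints": [[10, 20]]}]
serialized = json.dumps(to_mpii_json(fake))
print(json.loads(serialized)[0]["filename"])  # -> 000001.jpg
```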
Dataset
Go to the COCO dataset download page. Then download and extract 2017 Train images [118K/18GB], 2017 Val images [5K/1GB], and 2017 Train/Val annotations [241MB] to, for example, ~/work/dataset/coco_dataset.
OK, let's begin!
$ cat begin_train.sh
cat config.ini
docker run --rm \
-v $(pwd):/work \
-v ~/work/dataset/mpii_dataset:/mpii_dataset \
-v ~/work/dataset/coco_dataset:/coco_dataset \
--name ppn_idein \
-w /work \
idein/chainer:5.1.0 \
python3 train.py
$ sudo bash begin_train.sh
--runtime=nvidia may be required in some environments. path/to/dataset refers to the dataset root on the host machine.
To train with the COCO dataset instead of MPII, edit config.ini as follows.
Before:
# parts of config.ini
[dataset]
type = mpii
After:
# parts of config.ini
[dataset]
type = coco
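The `[dataset]` switch above can be read with Python's standard configparser; a minimal sketch (the loader in train.py presumably does something similar, but this is an assumption):

```python
import configparser

# Parse the same fragment of config.ini shown above.
cfg = configparser.ConfigParser()
cfg.read_string("""
[dataset]
type = coco
""")

dataset_type = cfg["dataset"]["type"]
assert dataset_type in ("mpii", "coco"), "unsupported dataset type"
print(dataset_type)  # -> coco
```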
To change the backbone network, edit config.ini as follows.
Before:
[model_param]
model_name = mv2
After:
[model_param]
# you may also choose resnet34 or resnet50
model_name = resnet18
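A common way a `model_name` option like this gets dispatched is through a name-to-constructor table; here is a hypothetical sketch (the actual class names and factory logic in this repo may differ):

```python
# Hypothetical registry mapping config values to model factories.
# Real code would construct the actual Chainer backbone networks.
MODELS = {
    "mv2": lambda: "MobileNetV2 backbone",
    "resnet18": lambda: "ResNet18 backbone",
    "resnet34": lambda: "ResNet34 backbone",
    "resnet50": lambda: "ResNet50 backbone",
}

def build_model(model_name):
    """Look up and instantiate the backbone named in config.ini."""
    try:
        return MODELS[model_name]()
    except KeyError:
        raise ValueError(f"unknown model_name: {model_name!r}")

print(build_model("resnet18"))  # -> ResNet18 backbone
```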
$ sudo bash run_predict.sh ./trained
You can tune the prediction behavior by editing the [predict] section of config.ini:
[predict]
# If `False` is set, hide bounding boxes of annotations other than human instances.
visbbox = True
# threshold on detection confidence
detection_thresh = 0.15
# ignore humans whose number of keypoints is less than min_num_keypoints
min_num_keypoints = 1
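The two thresholds above amount to a simple post-filter on candidate detections. A hedged sketch of that logic follows; the detection record layout is hypothetical, not the repo's actual data structure:

```python
def filter_detections(detections, detection_thresh=0.15, min_num_keypoints=1):
    """Keep only candidates that clear the confidence threshold and
    have at least min_num_keypoints detected keypoints. The dict keys
    here are illustrative only."""
    return [
        d for d in detections
        if d["score"] >= detection_thresh
        and len(d["keypoints"]) >= min_num_keypoints
    ]

candidates = [
    {"score": 0.9, "keypoints": [(1, 2), (3, 4)]},
    {"score": 0.05, "keypoints": [(1, 2)]},   # below detection_thresh
    {"score": 0.5, "keypoints": []},          # too few keypoints
]
print(len(filter_detections(candidates)))  # -> 1
```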
We tested on an Ubuntu 16.04 machine with a GTX 1080 (Ti) GPU.
We will build OpenCV from source to visualize the result on GUI.
$ cd docker/gpu
$ cat build.sh
docker build -t ppn .
$ sudo bash build.sh
Here is a result of ResNet18 trained with COCO, running on a laptop PC.
Connect a USB camera that OpenCV can recognize.
Run video.py:
$ python video.py ./trained
or
$ sudo bash run_video.sh ./trained
For faster processing, run high_speed.py instead of video.py.
Please cite the paper in your publications if it helps your research:
@InProceedings{Sekii_2018_ECCV,
  author    = {Sekii, Taiki},
  title     = {Pose Proposal Networks},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}