A PyTorch implementation of adversarial multi-person pose estimation. This repository implements several pose estimation methods in PyTorch.
The file `lsp_mpii.h5` contains the annotations of the MPII and LSP training data and the LSP test data. Place the LSP and MPII images in `data/LSP/images` and `data/mpii/images`. Place the COCO annotations in `data/coco/annotations` and the images in `data/coco/images`, as suggested by cocoapi. The file `valid_id` contains the image ids used for validation.
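Put together, the paths above give the following `data/` layout (only the folders mentioned above are shown):

```
data/
├── LSP/
│   └── images/
├── mpii/
│   └── images/
└── coco/
    ├── annotations/
    └── images/
```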
Compile the C implementation of the associative embedding loss. Code credit: umich-vl/pose-ae-train.

```shell
cd src/extensions/AE
python build.py  # be sure to have a visible CUDA device
```
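For intuition, this is roughly what the associative embedding loss computes: a "pull" term that draws the tag values of one person's joints toward a shared reference, and a "push" term that repels the reference tags of different people. The sketch below is illustrative; the function name and input layout are assumptions, and the compiled extension's actual API differs.

```python
import torch

def ae_grouping_loss(tags):
    """Sketch of the associative-embedding grouping loss (Newell et al.).

    `tags` is a list with one 1-D tensor per person, holding the
    predicted tag values at that person's visible joints. This is an
    illustration only, not the extension's real interface.
    """
    # Reference tag for each person: the mean tag over their joints.
    refs = [t.mean() for t in tags]
    # Pull term: a person's joint tags should match their reference tag.
    pull = sum(((t - r) ** 2).mean() for t, r in zip(tags, refs)) / len(tags)
    # Push term: reference tags of different people should be far apart.
    push = torch.zeros(())
    n = len(refs)
    for i in range(n):
        for j in range(n):
            if i != j:
                push = push + torch.exp(-(refs[i] - refs[j]) ** 2)
    push = push / max(n * (n - 1), 1)
    return pull, push
```

With well-separated people the push term is near zero, and with internally consistent tags the pull term is zero, so grouping joints by nearest reference tag recovers the person assignments.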
- `data`: put the training / testing data here
- `src`:
  - `models`: model definitions
  - `datasets`: dataset definitions
  - `extensions`:
    - `AE`: code from Associative Embedding
  - `utils`
All the other folders represent different tasks. Each contains a training script `train.py` and the definition of its command-line options, `opts.py`.
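For illustration, an `opts.py` of this kind typically builds an argparse parser along the following lines. The option names below are hypothetical, not the repository's actual flags; each task folder defines its own set.

```python
import argparse

def get_parser():
    # Hypothetical options for illustration only; the real opts.py in
    # each task folder defines its own flags and defaults.
    parser = argparse.ArgumentParser(description='pose training options')
    parser.add_argument('--batch-size', type=int, default=16)
    parser.add_argument('--lr', type=float, default=2.5e-4)
    parser.add_argument('--num-epochs', type=int, default=200)
    return parser

# train.py would then call: opts = get_parser().parse_args()
```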
- `hgpose`: training code for *Stacked Hourglass Networks for Human Pose Estimation* (single-person).
- `hgpose-ae`: training code for *Associative Embedding: End-to-End Learning for Joint Detection and Grouping* (multi-person).
  For COCO comparison, testing is run on the images listed in `valid_id`.
- `advpose`: training code for *Self Adversarial Training for Human Pose Estimation* (single-person).
- `advpose-ae`: training code combining `advpose` with the `AE_loss` (multi-person).
Known issue: `advpose-ae` only supports a single GPU. Multi-GPU training gets stuck at random; the problem appears to be caused by the `AE_loss` extension.