A GUI based on the Python API of OpenPose, running on Windows with CUDA 10 and cuDNN 7. It supports body, hand, and face keypoint estimation and data saving. Real-time gesture recognition is realized through a two-layer neural network trained on skeletons collected from the GUI.
Install CUDA 10 and [cudnn7], or download them from my BaiduDisk (password: 4685).
Run models/getModels.bat
to get the models, or download them from my BaiduDisk (password: rmkn)
and put the models in the corresponding positions.
Download the 3rd-party DLLs from my BaiduDisk (password: 64sg)
and unzip them into your 3rdparty folder.
Checkboxes toggle keypoint estimation (Body, Hand, Face).

Gesture recognition can only be used when the Hand checkbox is on. My model is only a 2-layer MLP, and the data was collected with a front camera and the left hand, so it may have many limitations. You can train your own model and replace it.

Action recognition

Emotion recognition
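As a rough illustration of what such a two-layer MLP computes, here is a minimal forward pass over a flattened hand skeleton. The hidden size, class count, and random weights are placeholders for illustration, not the shipped model:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Hidden layer with ReLU, then a linear output layer with softmax
    # over the gesture classes.
    h = np.maximum(0.0, x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

# One hand has 21 keypoints with (x, y, score); flatten to a 63-dim vector.
rng = np.random.default_rng(0)
x = rng.standard_normal(63)
W1, b1 = rng.standard_normal((63, 32)) * 0.1, np.zeros(32)  # hidden size 32 is an assumption
W2, b2 = rng.standard_normal((32, 5)) * 0.1, np.zeros(5)    # 5 gesture classes is an assumption
probs = forward(x, W1, b1, W2, b2)
print(probs.shape)  # (5,)
```

In practice the weights would come from training on the skeletons saved by the GUI.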
You will get an output folder like the following figure. The count is set to 0 when the program starts and increases automatically with the number of frames saved.
```python
import numpy as np

data_body = np.load('body/0001_body.npy')
data_hand = np.load('hand/0001_hand.npy')
data_face = np.load('face/0001_face.npy')

print(data_body.shape)
# (1, 25, 3) : person_num x key_points_num x x_y_score
print(data_hand.shape)
# (2, 1, 21, 3) : left_right x person_num x key_points_num x x_y_score
print(data_face.shape)
# (1, 70, 3) : person_num x key_points_num x x_y_score
```
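For example, individual keypoints can be pulled out of these arrays by plain indexing. Dummy zero arrays stand in for real recordings below; keypoint 0 is the nose in OpenPose's BODY_25 layout and the wrist in its hand model:

```python
import numpy as np

# Dummy arrays with the same shapes the GUI saves
# (real data would come from np.load as shown above).
data_body = np.zeros((1, 25, 3))
data_hand = np.zeros((2, 1, 21, 3))
data_face = np.zeros((1, 70, 3))

# Body keypoint 0 of the first person (the nose in BODY_25).
x, y, score = data_body[0, 0]

# Left hand (index 0) of the first person, wrist keypoint 0.
left_wrist = data_hand[0, 0, 0]
print(left_wrist.shape)  # (3,)
```

A keypoint with score 0 was not detected in that frame, so it is worth checking the score before using the coordinates.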
Collect data and make a folder for every class, then run

```
python train.py -p C:\Users\Administrator\Desktop\自建数据集\hand
```

to train your model (replace the path with your own dataset path).
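The expected dataset layout is one subfolder per class containing the saved .npy skeletons. A minimal loader for that layout might look like the sketch below; the actual loader inside train.py may differ:

```python
import os
import numpy as np

def load_dataset(root):
    """Load .npy skeleton files from one subfolder per class.

    Returns a feature matrix X (one flattened skeleton per row),
    integer labels y, and the sorted class names.
    """
    X, y = [], []
    classes = sorted(os.listdir(root))
    for label, name in enumerate(classes):
        folder = os.path.join(root, name)
        for fname in sorted(os.listdir(folder)):
            if fname.endswith('.npy'):
                arr = np.load(os.path.join(folder, fname))
                X.append(arr.reshape(-1))  # flatten keypoints to a feature vector
                y.append(label)
    return np.array(X), np.array(y), classes
```

With the hand arrays saved by the GUI, each sample flattens to 2 x 1 x 21 x 3 = 126 features.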