# Training a ResNet on UMDFaces for face recognition
This repository shows how to train ResNet models in PyTorch on publicly available face recognition datasets.
Create a conda environment:

```shell
conda create -n resnet-face python=2.7
```

and activate it:

```shell
source activate resnet-face
```

Add the PyTorch channel:

```shell
conda config --add channels soumith
```

Then install PyTorch and the remaining dependencies:

```shell
conda install pytorch torchvision cuda80 -c soumith
conda install scipy Pillow tqdm scikit-learn scikit-image numpy matplotlib ipython pyyaml
```
Demo to train a ResNet-50 model on the UMDFaces dataset.

Download the UMDFaces dataset, then set the `dataset_path` and `output_path` variables in the face-cropping script to the dataset location and the desired output location.
Crop the faces, running the dataset batches in parallel:

```shell
for i in {0..2}; do python umd-face/run_crop_face -b $i & done
```
:small_red_triangle: TODO - takes very long, convert into shell+ImageMagick script.
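The `-b` flag above presumably selects which batch of the dataset each process handles. Splitting a file list into such contiguous batches can be sketched as follows (`batch_slice` and the file names are hypothetical, not part of the repository):

```python
# Hypothetical sketch of batch partitioning: each crop process handles
# one contiguous slice of the image list, selected by its batch index.

def batch_slice(items, batch_idx, num_batches):
    """Return the slice of `items` handled by process `batch_idx`."""
    per_batch = -(-len(items) // num_batches)  # ceiling division
    return items[batch_idx * per_batch:(batch_idx + 1) * per_batch]

images = ["img%03d.jpg" % i for i in range(10)]
parts = [batch_slice(images, b, 3) for b in range(3)]
assert sum(parts, []) == images  # every image is covered exactly once
```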
The training script wraps the model in `DataParallel` to use multiple GPUs:

```python
model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3, 4]).cuda()
```

Change these numbers depending on the number of available GPUs. Use `watch -d nvidia-smi` to constantly monitor the multi-GPU usage from the terminal.

Set the path to the cropped dataset:

```shell
DATASET_PATH=local/path/to/cropped/umd/faces
```
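Rather than hard-coding the GPU ids, the `device_ids` list can be derived from the visible GPU count. A minimal sketch (the `make_device_ids` helper is hypothetical, not part of the repository):

```python
# Sketch: derive the DataParallel device ids from the visible GPU count
# instead of hard-coding [0, 1, 2, 3, 4].

def make_device_ids(num_gpus):
    """Device ids [0, ..., num_gpus - 1] for torch.nn.DataParallel."""
    return list(range(num_gpus))

# Typical use inside a training script (not executed here):
#   num_gpus = torch.cuda.device_count()
#   if num_gpus > 1:
#       model = torch.nn.DataParallel(model, device_ids=make_device_ids(num_gpus))
#   model = model.cuda()
```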
Training proceeds in stages. Each stage resumes from the best model of the previous stage (passed via the `--model_path` or `-m` command-line argument) with the learning rate divided by a factor of 10. A training configuration is selected with the `-c` flag in the command. Example to train a ResNet-50 on the UMDFaces dataset using config-4:

```shell
python umd-face/train_resnet_umdface.py -c 4 -d $DATASET_PATH
```
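The divide-by-10 schedule can be sketched as a small helper (the base learning rate of 0.1 is illustrative, not the repository's actual value):

```python
# Sketch of the stagewise schedule: each stage divides the learning
# rate of the previous stage by 10. The base LR of 0.1 is illustrative.

def stage_lr(base_lr, stage):
    """Learning rate used at a given stage (stage 0 = base_lr)."""
    return base_lr / (10 ** stage)

for stage in range(3):
    print(stage, stage_lr(0.1, stage))  # 0.1, then 0.01, then 0.001
```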
The next stage resumes from the best model of config-4:

```shell
python umd-face/train_resnet_umdface.py -c 5 -m ./umd-face/logs/MODEL-resnet_umdfaces_CFG-004_TIMESTAMP/model_best.pth.tar -d $DATASET_PATH
```

and so on for the subsequent stages. Logs are saved under `./umd-face/logs`, i.e. `./umd-face/logs/MODEL-CFG-TIMESTAMP/`.
Under an experiment's log folder, the settings for that experiment can be viewed in `config.yaml`; metrics such as the training and validation losses are updated in `log.csv`.
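For a quick look at `log.csv` without plotting, the standard `csv` module is enough. A sketch, where the column names (`epoch`, `train/loss`, `valid/loss`) are assumptions about the log format, not confirmed from the repository:

```python
import csv

# Sketch: find the best epoch directly from log.csv contents.
# The column names here are assumed, not taken from the actual log.
lines = [
    "epoch,train/loss,valid/loss",
    "0,2.31,2.10",
    "1,1.87,1.92",
]
rows = list(csv.DictReader(lines))
best = min(rows, key=lambda r: float(r["valid/loss"]))
print("best epoch:", best["epoch"])  # here: epoch 1
```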
Most of the usual settings (data augmentations, learning rates, number of epochs to train, etc.) can be customized by editing `config.py` and `umd-face/train_resnet_umdface.py`.

To plot the training curves from a log file:

```shell
LOG_FILE=umd-face/logs/MODEL-resnet_umdfaces_CFG-004_TIMESTAMP/log.csv
python -c "from utils import plot_log_csv; plot_log_csv('$LOG_FILE')"
python -c "from utils import plot_log; plot_log('$LOG_FILE')"
```
stage 1 | stage 2 | stage 3 |
---|---|---|
*(training-curve plot)* | *(training-curve plot)* | *(training-curve plot)* |
:red_circle: TODO - release pre-trained ResNet-50 on UMD-Faces :construction:
Verification demo: We have a short script, `run_resnet_demo.py`, to demonstrate the usage of the model on a toy face verification example. The visualized output of the demo is saved in the root directory of the project. The 3 sample images are taken from the LFW dataset.
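A common way such a verification demo decides "same person or not" is by thresholding the cosine similarity of the two embedding vectors. A self-contained sketch, where the embeddings and the 0.5 threshold are illustrative values, not taken from the demo script:

```python
import math

# Verification sketch: two faces count as the same identity when the
# cosine similarity of their embeddings exceeds a threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb_a = [0.90, 0.10, 0.20]  # embedding of face image A (illustrative)
emb_b = [0.85, 0.15, 0.25]  # embedding of face image B (illustrative)
same_person = cosine_similarity(emb_a, emb_b) > 0.5
```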
## Training a ResNet-50 model in PyTorch on the VGGFace2 dataset
The face bounding-box annotations are in `[filename, subject_id, xmin, ymin, width, height]` format (the CSV with pre-computed face crops is not yet made available). The `vgg-face-2/crop_face.sh` script is used to crop the face images into a separate output folder. Please look at the settings section in the script to assign the correct paths, depending on where the VGGFace2 data was downloaded on your local machine. This takes about a day. TODO - multi-process.

The cropped images are saved under `OUTPUT_PATH/train-crop` and `OUTPUT_PATH/val-crop`.
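The `[xmin, ymin, width, height]` boxes need converting to corner coordinates before cropping, e.g. PIL's `Image.crop` takes `(left, upper, right, lower)`. A minimal sketch of that conversion (the helper name is hypothetical; `crop_face.sh` itself does the cropping with command-line tools):

```python
# Sketch: annotation boxes are [xmin, ymin, width, height], while
# PIL's Image.crop expects (left, upper, right, lower) corners.

def bbox_to_crop_box(xmin, ymin, width, height):
    """Convert an [xmin, ymin, width, height] box to PIL corner form."""
    return (xmin, ymin, xmin + width, ymin + height)

# e.g. (not executed): Image.open(fn).crop(bbox_to_crop_box(30, 40, 100, 120))
```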
Training again proceeds in stages, each resuming from the previous stage's best model:

```shell
python vgg-face-2/train_resnet50_vggface_scratch.py -c 20
python vgg-face-2/train_resnet50_vggface_scratch.py -c 21 -m PATH_TO_BEST_MODEL_CFG-20
python vgg-face-2/train_resnet50_vggface_scratch.py -c 22 -m PATH_TO_BEST_MODEL_CFG-21
```
Instructions on how to set up and run the LFW evaluation are at `lfw/README.md`.
DevTest | 10 fold |
---|---|
*(LFW results to be added)* | |