PyTorch implementation of Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation (CVPR 2020) [arXiv][CVF]
If you find our work useful in your research, please consider citing:
@inproceedings{ma2020deep,
title={Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation},
author={Ma, Cheng and Jiang, Zhenyu and Rao, Yongming and Lu, Jiwen and Zhou, Jie},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
Install the dependencies:
pip install numpy opencv-python tqdm imageio pandas matplotlib tensorboardX
The CelebA dataset can be downloaded here. Please download and unzip the img_celeba.7z file.
The Helen dataset can be downloaded here. Please download and unzip the 5 parts of All images.
Testing sets for CelebA and Helen can be downloaded from Google Drive or Baidu Drive (extraction code: 6qhx).
Landmark annotations for CelebA and Helen can be downloaded in the annotations folder from Google Drive or Baidu Drive (extraction code: 6qhx).
The pretrained models can also be downloaded from the models folder in the above links. Then please place them in ./models.
To train a model:
cd code
python train.py -opt options/train/train_(DIC|DICGAN)_(CelebA|Helen).json
The JSON file will be processed by options/options.py. Please refer to it for more details.
Before running this code, please modify the option files to your own configurations, including:
- dataroot_HR and dataroot_LR paths for the data loader
- info_path for the annotations
- the path to the pretrained LightCNN model (LightCNN_feature.pth) if training a GAN model

During training, you can use TensorBoard to monitor the losses with:
tensorboard --logdir tb_logger/NAME_OF_YOUR_EXPERIMENT
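As a rough illustration, the data-related entries of a training option file might look like the sketch below. The key names and paths here are assumptions for illustration only; check the actual JSON files under options/train/ for the real structure.

```json
{
  "datasets": {
    "train": {
      "dataroot_HR": "/path/to/CelebA/HR",
      "dataroot_LR": "/path/to/CelebA/LR",
      "info_path": "/path/to/annotations/train_info.pkl"
    }
  },
  "path": {
    "lightcnn": "./models/LightCNN_feature.pth"
  }
}
```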
To generate SR images by a model:
cd code
python test.py -opt options/test/test_(DIC|DICGAN)_(CelebA|Helen).json
The SR results will be stored in results/{test_name}/{dataset_name}. The PSNR and SSIM values will be stored in result.json, while the average results will be recorded in average_result.txt.
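If you want to post-process the per-image metrics yourself, a small script can average them. The JSON layout below (image name mapped to a dict of psnr and ssim) is an assumption for illustration, not the repo's documented format:

```python
import json

def average_metrics(results):
    # results: {image_name: {"psnr": float, "ssim": float}}
    # (assumed layout; adapt the keys to the actual result.json)
    n = len(results)
    return {
        "psnr": sum(v["psnr"] for v in results.values()) / n,
        "ssim": sum(v["ssim"] for v in results.values()) / n,
    }

# Hypothetical sample of what result.json might contain
sample = {
    "0001.png": {"psnr": 26.5, "ssim": 0.80},
    "0002.png": {"psnr": 27.5, "ssim": 0.82},
}
avg = average_metrics(sample)
print(json.dumps(avg))
```

In practice you would replace the sample dict with json.load(open("result.json")).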
Pretrained models can be downloaded from the models folder from Google Drive or Baidu Drive (extraction code: 6qhx). Then you can modify the directory of the pretrained model and the LR image sets in the option files and run test.py for a quick test.

To evaluate the SR results by landmark detection:
python eval_landmark.py --info_path /path/to/landmark/annotations --data_root /path/to/result/images
Please download HG_68_CelebA.pth from Google Drive or Baidu Drive (extraction code: 6qhx) and put it into the ./models directory. The landmark detection results will be stored in /path/to/result/images/landmark_result.json, and the averaged results will be in landmark_average_result.txt.
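Landmark-based evaluation typically reports a normalized mean error (NME): the mean point-to-point distance between detected and ground-truth landmarks, divided by a normalization factor such as the inter-ocular distance. The sketch below illustrates the metric in general; the exact normalization used by eval_landmark.py is not assumed here:

```python
import numpy as np

def nme(pred, gt, norm):
    # Mean Euclidean distance over landmark pairs, divided by `norm`
    # (e.g. inter-ocular distance). pred, gt: (N, 2) arrays.
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / norm)

# Toy example: two landmarks, off by 3 and 4 pixels respectively
gt = np.array([[0.0, 0.0], [10.0, 0.0]])
pred = np.array([[0.0, 3.0], [10.0, 4.0]])
print(nme(pred, gt, norm=10.0))  # (3 + 4) / 2 / 10 = 0.35
```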
The code is based on SRFBN and hourglass-facekeypoints-detection.