Face Renovation

Official repository of the paper "HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment".

Face-Renovation

HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment

Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao

Project | arXiv | ACM link | Supplementary Material

Update 20201026: Pretrained checkpoints released to facilitate reproduction.

Update 20200911: Please find video restoration results at this repo!

Update: This paper is accepted at ACM Multimedia 2020.


Contents

  1. Usage
  2. Benchmark
  3. Remarks
  4. License
  5. Citation
  6. Acknowledgements

Usage

Environment

  • Ubuntu/CentOS
  • PyTorch 1.0+
  • CUDA 10.1
  • Python packages: opencv-python
  • Data augmentation tool: imgaug
  • Face Recognition Toolkit for evaluation
  • tqdm to make you less anxious when testing :)

Dataset Preparation

Download FFHQ, resize the images to 512x512, and hold out ids [65000, 70000) for testing. We only use the first 10000 images for training, which takes 2~3 days on a P100 GPU; training with the full FFHQ is possible but could take weeks.
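A minimal sketch of this resize-and-split step, assuming the original FFHQ images sit flat in one folder and are sorted by index (all folder paths here are placeholders):

import os
from PIL import Image

ffhq_root = 'datasets/ffhq'              # placeholder: original FFHQ images
train_root = 'datasets/ffhq512/train'
test_root = 'datasets/ffhq512/test'
os.makedirs(train_root, exist_ok=True)
os.makedirs(test_root, exist_ok=True)

for idx, name in enumerate(sorted(os.listdir(ffhq_root))):
    img = Image.open(os.path.join(ffhq_root, name)).resize((512, 512), Image.LANCZOS)
    if 65000 <= idx < 70000:             # ids [65000, 70000) held out for testing
        img.save(os.path.join(test_root, name))
    elif idx < 10000:                    # only the first 10000 images used for training
        img.save(os.path.join(train_root, name))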

After that, run degrade.py to acquire paired images for training. You need to specify the degradation type and input root in the script first.
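degrade.py defines the actual degradation mixtures used in the paper; the snippet below is only a rough illustration of how paired (HQ, LQ) images can be produced with imgaug. The pipeline parameters and folder names are made up, not the script's settings:

import os
import imageio
import imgaug.augmenters as iaa

hq_root = 'datasets/ffhq512/train'       # placeholder input root (HQ images)
lq_root = 'datasets/ffhq512/train_lq'    # degraded counterparts
os.makedirs(lq_root, exist_ok=True)

# Illustrative degradation pipeline (not the paper's exact settings):
degrade = iaa.Sequential([
    iaa.GaussianBlur(sigma=(1.0, 3.0)),
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),
    iaa.JpegCompression(compression=(50, 85)),
])

for name in sorted(os.listdir(hq_root)):
    hq = imageio.imread(os.path.join(hq_root, name))
    lq = degrade(image=hq)
    imageio.imwrite(os.path.join(lq_root, name), lq)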

Configurations

The configuration is stored in options/config_hifacegan.py. The options should be self-explanatory, but feel free to open an issue anytime.

Training and Testing

python train.py            # A fool-proof training script
python test.py             # Test on synthetic dataset
python test_nogt.py        # Test on real-world images
python two_source_test.py  # Visualization of Fig 5

Pretrained Models

Download, unzip, and put under ./checkpoints, then change the checkpoint names in the configuration file accordingly.

BaiduNetDisk (extraction code: cxp0)

YandexDisk

Note:

  • These checkpoints work best on the synthetic degradation prescribed in degrade.py; don't expect them to handle real-world LQ face images. You can try to fine-tune them with additionally collected samples, though.
  • There are two face_renov checkpoints trained under different degradation mixtures. Unfortunately I've forgotten which one was used for our paper, so just try both and select the better one. Also, this could give you a hint about how our model behaves under a different degradation setting :)
  • You may need to set netG=lipspade and ngf=48 inside the configuration file (see the sketch after this list). In case of loading failure, don't hesitate to submit an issue or email me.
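A rough sketch of the settings the notes above refer to. The field names follow this README; the real options/config_hifacegan.py may organize them differently, and name is a hypothetical field for the checkpoint folder:

# Sketch only: field names follow the notes above, not necessarily the real file layout.
netG = 'lipspade'    # generator variant expected by the released checkpoints
ngf = 48             # base channel width; reduce from 64 to fit 12 GB cards
batchSize = 1        # lower further if memory is still an issue
name = 'face_renov'  # hypothetical: which folder under ./checkpoints to load from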

Evaluation

Please find in metrics_package folder:

  • main.py: GPU-based PSNR, SSIM, MS-SSIM, and FID (a minimal CPU-based sketch of the PSNR/SSIM part follows this list)
  • face_dist.py: CPU-based face embedding distance (FED) and landmark localization error (LLE)
  • PerceptualSimilarity\main.py: GPU-based LPIPS
  • niqe\niqe.py: CPU-based, no-reference NIQE
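For orientation, this is roughly what the PSNR/SSIM part measures; the sketch uses skimage on CPU, whereas the repository's main.py computes the metrics on GPU. Folder paths are placeholders:

import os
import imageio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

result_dir = 'results/face_renov'        # placeholder, no trailing slash
gt_dir = 'datasets/ffhq512/test'

psnrs, ssims = [], []
for name in sorted(os.listdir(result_dir)):
    pred = imageio.imread(os.path.join(result_dir, name))
    gt = imageio.imread(os.path.join(gt_dir, name))
    psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=255))
    ssims.append(structural_similarity(gt, pred, channel_axis=-1, data_range=255))

print(f'PSNR: {sum(psnrs) / len(psnrs):.2f}  SSIM: {sum(ssims) / len(ssims):.4f}')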

Note:

  • Read the scripts and modify the result folder path(s) before testing (do not add a trailing /); the results will be displayed on screen and saved to a txt file.
  • At least 10 GB of GPU memory is required for main.py. If this is too heavy for you, reduce bs=250 at line 79.
  • Initializing the Inception V3 model for FID could take several minutes; just be patient. If you find a solution, please submit a PR.
  • By default, face_dist.py runs with 8 parallel subprocesses, which could cause errors in certain environments. In that case, just disable the multiprocessing and replace it with a for loop (see the sketch after this list). This would take 2~3 hours for 5k images, so you may want to wrap the loop in tqdm to reduce your anxiety.
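A minimal sketch of that serial fallback. image_list and process_one are stand-ins for the file list and per-image routine that face_dist.py normally dispatches to its worker pool; replace the pool-based call with a loop like this:

from tqdm import tqdm

# Serial fallback: run the per-image routine one file at a time, with a progress bar.
results = []
for path in tqdm(image_list):
    results.append(process_one(path))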

Benchmark

Please refer to benchmark.md for benchmark experimental settings and performance comparison.

Memory Cost: The default model is designed to fit in a P100 card with 16 GB memory. For a Titan-X or 1080Ti card with 12 GB memory, you can reduce ngf=48, or further set batchSize=1, without significant performance drop.

Inference Speed: Currently the inference script is single-threaded and runs at about 5 fps. To further increase the inference speed, possible options are using a multi-threaded dataloader, batch inference, and combining normalization and convolution operations; a sketch of the first two follows.
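A rough sketch of the first two options, assuming a torch Dataset that yields preprocessed LQ tensors and a generator netG already loaded from a checkpoint (lq_dataset and netG are placeholders, not the repository's actual classes):

import torch
from torch.utils.data import DataLoader

# Multi-worker loading plus a batched forward pass; saving the outputs is omitted.
loader = DataLoader(lq_dataset, batch_size=4, num_workers=4, pin_memory=True)

netG.eval().cuda()
with torch.no_grad():
    for lq_batch in loader:
        sr_batch = netG(lq_batch.cuda(non_blocking=True))
        # write sr_batch images to disk here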

Remarks

Face Renovation is not designed to create a perfect specimen OUT OF you, but to bring out the best WITHIN you.

The Philosophy of Face Renovation | Understanding of HiFaceGAN

License

Copyright © 2020, Alibaba Group. All rights reserved. This code is intended for academic and educational use only; any commercial usage without authorization is strictly prohibited.

Citation

Please kindly cite our paper when using this project for your research.

@article{Yang2020HiFaceGANFR,
  title={HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment},
  author={Lingbo Yang and C. Liu and P. Wang and Shanshe Wang and P. Ren and Siwei Ma and W. Gao},
  journal={Proceedings of the 28th ACM International Conference on Multimedia},
  year={2020}
}

Acknowledgements

The replenishment module borrows the implementation of SPADE.
