StyleGAN2 Distillation for Feed-forward Image Manipulation
paper: StyleGAN2 Distillation for Feed-forward Image Manipulation (https://arxiv.org/abs/2003.03581)
official release: https://github.com/EvgenyKashin/stylegan2-distillation
pytorch: 1.4.0
python: 3.7.4
Inference takes around 0.25 to 0.4 seconds per single 1024 x 1024 image
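The latency figure above can be reproduced with a simple wall-clock timing helper. This is a generic sketch, not code from this repo; `run` is a hypothetical stand-in for one feed-forward pass on a 1024 x 1024 input.

```python
import time

def time_inference(run, warmup=3, iters=10):
    """Average wall-clock seconds per call of `run`.

    Hypothetical helper: warm-up calls are excluded so one-time
    costs (CUDA context, cuDNN autotuning) do not skew the average.
    """
    for _ in range(warmup):
        run()
    t0 = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - t0) / iters
```

For GPU inference, each `run` should synchronize (e.g. `torch.cuda.synchronize()`) before returning, since CUDA kernel launches are asynchronous.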
Results after training on a 5000-pair synthetic dataset for 150 epochs on a V100 GPU
python misc_imdb_preprocessing.py
python all_in_one.py --attribute [gender/age] --phase train --db_root [imdb_dataset_path]
python all_in_one.py --attribute [gender/age] --phase test --db_root [imdb_dataset_path]
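The paper's approach relies on an attribute direction in StyleGAN2's latent space, estimated from latents whose generated images have been labeled by an attribute classifier (e.g. gender or age from the IMDB pipeline above). A minimal mean-difference sketch, with illustrative names that are not this repo's exact code:

```python
import numpy as np

def attribute_direction(latents, labels):
    """Estimate a unit attribute direction in latent space.

    Hypothetical sketch: the direction is the difference between the
    mean latent of positive-labeled samples (labels == 1) and the mean
    latent of negative-labeled samples (labels == 0), normalized.
    """
    pos = latents[labels == 1].mean(axis=0)
    neg = latents[labels == 0].mean(axis=0)
    d = pos - neg
    return d / np.linalg.norm(d)
```

More robust variants fit a linear classifier on the latents and take its normal vector; the mean-difference version above is the simplest consistent estimator of the same idea.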
After converting the StyleGAN2 checkpoint to PyTorch format, run:
python generate_distillation.py --phase set --attribute [gender/age]
python generate_distillation.py --phase multiple --attribute [gender/age]
python generate_distillation.py --phase pair --attribute [gender/age] --synthetic_path [target_path]
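A synthetic pair in the `pair` phase corresponds to shifting one latent along the attribute direction with opposite signs, then decoding both latents with the generator. A hedged sketch of the latent arithmetic only (the helper name and `alpha` scale are illustrative, not the repo's exact API):

```python
import numpy as np

def make_pair(w, direction, alpha=2.0):
    """Return (source, target) latents shifted by -alpha and +alpha.

    Hypothetical helper: `w` is a latent vector, `direction` an
    attribute direction (normalized here), and the two outputs would
    be fed to the StyleGAN2 generator to render a paired image.
    """
    d = np.asarray(direction, dtype=np.float64)
    d = d / np.linalg.norm(d)
    return w - alpha * d, w + alpha * d
```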
With the synthetic dataset generated above, train pix2pixHD:
python train.py --name [name] --label_nc 0 --no_instance --dataroot [synthetic data path] --reverse [False for forward, True for backward] $@
python test.py --name [name] --reverse [False for forward, True for backward] --netG global --ngf 64 --label_nc 0 --resize_or_crop none --no_instance --dataroot [synthetic data path] --which_epoch latest
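The `--dataroot` passed to `train.py`/`test.py` must contain aligned source/target image folders. A sketch of building that layout, assuming the standard pix2pixHD `<phase>_A`/`<phase>_B` convention; this repo's exact folder names may differ:

```python
import os
import shutil

def build_dataroot(pairs, dataroot, phase="train"):
    """Copy (source, target) image pairs into a pix2pixHD-style layout.

    Hypothetical helper: creates <dataroot>/<phase>_A and
    <dataroot>/<phase>_B and copies each pair under a shared
    zero-padded index so the aligned dataset loader matches them up.
    """
    a_dir = os.path.join(dataroot, f"{phase}_A")
    b_dir = os.path.join(dataroot, f"{phase}_B")
    os.makedirs(a_dir, exist_ok=True)
    os.makedirs(b_dir, exist_ok=True)
    for i, (src, dst) in enumerate(pairs):
        shutil.copy(src, os.path.join(a_dir, f"{i:06d}.png"))
        shutil.copy(dst, os.path.join(b_dir, f"{i:06d}.png"))
```

Swapping the A and B roles is what the `--reverse` flag effectively does at load time, selecting forward or backward manipulation.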
The PyTorch StyleGAN2 code is slightly modified from rosinality's implementation; the PyTorch pix2pixHD code is borrowed from NVIDIA.