Official PyTorch implementation of GDWCT (CVPR 2019, oral)
This repository provides the official PyTorch implementation of GDWCT.
Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation (link)
Wonwoong Cho¹, Sungha Choi¹,², David Keetae Park¹, Inkyu Shin³, Jaegul Choo¹
¹Korea University, ²LG Electronics, ³Hanyang University
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (Oral)
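For context, the transformation named in the title replaces per-channel style normalization with a whitening-and-coloring transform applied to groups of channels. The sketch below is only a rough illustration of that idea, i.e., a classic closed-form WCT applied group-wise; it is not the method implemented in this repository, where the transformation is learned within the networks as described in the paper. The function name and arguments are hypothetical.

import torch

def group_wise_wct(content, style, n_group=4, eps=1e-5):
    # content, style: feature maps of shape (C, H, W), with C divisible by n_group.
    C, H, W = content.shape
    g = C // n_group
    out = torch.empty_like(content)
    for i in range(n_group):
        fc = content[i * g:(i + 1) * g].reshape(g, -1)  # (g, H*W) content group
        fs = style[i * g:(i + 1) * g].reshape(g, -1)    # (g, H*W) style group
        fc = fc - fc.mean(dim=1, keepdim=True)
        mu_s = fs.mean(dim=1, keepdim=True)
        fs = fs - mu_s
        eye = eps * torch.eye(g, device=fc.device, dtype=fc.dtype)
        # Whitening: remove the content group's covariance.
        ec, vc = torch.linalg.eigh(fc @ fc.t() / (fc.shape[1] - 1) + eye)
        whitening = vc @ torch.diag(ec.clamp(min=eps).rsqrt()) @ vc.t()
        # Coloring: impose the style group's covariance and mean.
        es, vs = torch.linalg.eigh(fs @ fs.t() / (fs.shape[1] - 1) + eye)
        coloring = vs @ torch.diag(es.clamp(min=eps).sqrt()) @ vs.t()
        out[i * g:(i + 1) * g] = (coloring @ (whitening @ fc) + mu_s).reshape(g, H, W)
    return out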
git clone https://github.com/WonwoongCho/GDWCT.git
cd GDWCT
bash download.sh celeba
We would like to directly provide the data used in the paper, but we are not allowed to distribute it because it has been preprocessed. We apologize for this.
All settings and hyperparameters are specified in the config.yaml file; please refer to the descriptions provided as comments in that file. Once configured, GDWCT can be trained or tested with the following script (NOTE: the values of 'MODE', 'LOAD_MODEL', and 'START' should be changed if you want to test the model):
python run.py
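For reference, a plausible training configuration sets values along the following lines in config.yaml (the exact values here are illustrative; the comments in the file are authoritative):

MODE: train
LOAD_MODEL: False
START: 0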
Run the following script if you need the pretrained models (Smile <=> Non-Smile, Bangs <=> Non-Bangs). They will be downloaded and unzipped into the ./pretrained_models/ directory.
bash download.sh pretrained
To test a pretrained model, change the following options in the config file. For example, if the name of the pretrained model is G_A_CelebA_Bangs_G4_320000.pth, set:
N_GROUP: 4
SAVE_NAME: CelebA_Bangs_G4
MODEL_SAVE_PATH: pretrained_models/
START: 320000
LOAD_MODEL: True
MODE: test
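After changing these options, testing uses the same entry point as training:

python run.py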
Please cite our paper if our work, including this code, is helpful for your research.
@InProceedings{GDWCT2019,
author = {Wonwoong Cho and Sungha Choi and David Keetae Park and Inkyu Shin and Jaegul Choo},
title = {Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2019}
}