This is our ongoing PyTorch implementation for ComboGAN. The code was written by Asha Anoosheh (built upon CycleGAN).
If you use this code for your research, please cite:
ComboGAN: Unrestrained Scalability for Image Domain Translation. Asha Anoosheh, Eirikur Agustsson, Radu Timofte, Luc Van Gool. In arXiv, 2017.
Install torchvision from source:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
Install the visdom and dominate Python libraries:
pip install visdom
pip install dominate
Clone this repository:
git clone https://github.com/AAnoosheh/ComboGAN.git
cd ComboGAN
Our ready datasets can be downloaded using ./datasets/download_dataset.sh <dataset_name>.
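For example, assuming painters_14 is one of the available dataset names (it matches the --dataroot used for the pretrained model below), the 14-painters data could be fetched with:
./datasets/download_dataset.sh painters_14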
A pretrained model for the 14-painters dataset can be found HERE. Place it under ./checkpoints/ and test it using the instructions below, with the arguments --name paint14_pretrained --dataroot ./datasets/painters_14 --n_domains 14 --which_epoch 1150.
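Put together, a sketch of the full testing invocation for this pretrained model (simply the testing command below filled in with the arguments above) is:
python test.py --phase test --name paint14_pretrained --dataroot ./datasets/painters_14 --n_domains 14 --which_epoch 1150 --serial_test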
Example running scripts can be found in the scripts directory.
To train a model:
python train.py --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs_constant_LR> --niter_decay <num_epochs_decaying_LR>
Checkpoints will be saved by default to ./checkpoints/<experiment_name>/
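As a concrete sketch (the experiment name and the epoch counts here are only illustrative, not recommended settings), training on the 14-painters data might look like:
python train.py --name paint14_example --dataroot ./datasets/painters_14 --n_domains 14 --niter 500 --niter_decay 500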
To continue training from a saved checkpoint:
python train.py --continue_train --which_epoch <checkpoint_number_to_load> --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs_constant_LR> --niter_decay <num_epochs_decaying_LR>
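For instance, to resume the illustrative run above from its epoch-500 checkpoint:
python train.py --continue_train --which_epoch 500 --name paint14_example --dataroot ./datasets/painters_14 --n_domains 14 --niter 500 --niter_decay 500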
To test the model:
python test.py --phase test --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --which_epoch <checkpoint_number_to_load> --serial_test
The test results will be saved to an HTML file: ./results/<experiment_name>/<epoch_number>/index.html
Flags: see options/train_options.py for training-specific flags; see options/test_options.py for test-specific flags; and see options/base_options.py for all common flags.
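Since the options are defined with Python's argparse (an assumption about the implementation, not stated above), the full list of flags can also be printed directly from the command line:
python train.py --help
python test.py --help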
Dataset format: the data directory given by --dataroot should contain subfolders of the form train*/ and test*/, and they are loaded in alphabetical order. (Note that a folder named train10 would be loaded before train2, and thus all checkpoints and results would be ordered accordingly.)
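For example, a hypothetical three-domain dataset (so --n_domains 3) could be laid out as:
./datasets/<your_dataset>/train0/
./datasets/<your_dataset>/train1/
./datasets/<your_dataset>/train2/
./datasets/<your_dataset>/test0/
./datasets/<your_dataset>/test1/
./datasets/<your_dataset>/test2/
The folder suffixes 0, 1, 2 are only an illustration; any names work as long as their alphabetical order matches the intended domain ordering.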
CPU/GPU selection (default --gpu_ids 0): set --gpu_ids -1 to use CPU mode; set --gpu_ids 0,1,2 for multi-GPU mode. You need a large batch size (e.g. --batchSize 32) to benefit from multiple GPUs.
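As a sketch, a multi-GPU variant of the training command above simply appends these flags:
python train.py --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs_constant_LR> --niter_decay <num_epochs_decaying_LR> --gpu_ids 0,1,2 --batchSize 32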
Visualization: if --display_id > 0, the results and loss plot will appear on a local graphics web server launched by visdom. For this you should have visdom installed and a server running via the command python -m visdom.server. The default server URL is http://localhost:8097, and display_id corresponds to the window ID shown on the visdom server. The visdom display functionality is turned on by default; to avoid the extra overhead of communicating with visdom, set --display_id 0. The intermediate results are also saved to ./checkpoints/<experiment_name>/web/index.html; to avoid this, set the --no_html flag.
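For example, to watch training in a browser, start the server in a separate terminal before launching training, then open http://localhost:8097:
python -m visdom.server
Conversely, to train with no visualization output at all, append --display_id 0 --no_html to the training command.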
Preprocessing: images can be resized and cropped in different ways using the --resize_or_crop option. The default option 'resize_and_crop' resizes the image to size (opt.loadSize, opt.loadSize) and takes a random crop of size (opt.fineSize, opt.fineSize). 'crop' skips the resizing step and only performs random cropping. 'scale_width' resizes the image to have width opt.fineSize while keeping the aspect ratio. 'scale_width_and_crop' first resizes the image to have width opt.loadSize and then does a random crop of size (opt.fineSize, opt.fineSize).
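As an illustrative sketch (assuming opt.loadSize and opt.fineSize are exposed as the flags --loadSize and --fineSize, and with size values that are placeholders rather than repository defaults), width-preserving resizing followed by cropping could be requested with:
python train.py --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --resize_or_crop scale_width_and_crop --loadSize 286 --fineSize 256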
NOTE: one should not expect ComboGAN to work on just any combination of input and output datasets (e.g. dogs <-> houses). We find it works better if the datasets share similar visual content. For example, landscape painting <-> landscape photographs works much better than portrait painting <-> landscape photographs.