A PyTorch implementation of pix2pix + BEGAN (Boundary Equilibrium Generative Adversarial Networks)
/path/to/facades
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
CUDA_VISIBLE_DEVICES=x python main_pix2pixBEGAN.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
We found that both L_D and L_G stay consistently balanced (equilibrium parameter gamma=0.7) and converge, even though networks D and G differ in model capacity and detailed layer specification.
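The balancing comes from BEGAN's equilibrium mechanism: a control variable k_t scales the fake-image term in the discriminator loss and is nudged so that E[L(G(z))] tracks gamma * E[L(x)]. A minimal sketch of one balance step, assuming the per-batch reconstruction losses are already available as scalars (the function name and scalar interface are illustrative, not this repo's API):

```python
def began_step(loss_real, loss_fake, k, gamma=0.7, lambda_k=0.001):
    """One BEGAN balance step (illustrative sketch, not the repo's code).

    loss_real -- autoencoder reconstruction loss L(x) on real images
    loss_fake -- reconstruction loss L(G(z)) on generated images
    k         -- equilibrium variable k_t, kept in [0, 1]
    """
    loss_d = loss_real - k * loss_fake       # discriminator objective
    loss_g = loss_fake                       # generator objective
    # gamma controls the target ratio E[L(G(z))] / E[L(x)]
    balance = gamma * loss_real - loss_fake
    k_next = min(max(k + lambda_k * balance, 0.0), 1.0)
    return loss_d, loss_g, k_next
```

When the generator's reconstructions are too easy for D (loss_fake below gamma * loss_real), k grows and the discriminator focuses more on the fake term, which is what keeps the two curves from diverging.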
M_global
As the authors note, M_global is a good indicator for monitoring convergence.
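For reference, the BEGAN paper defines the convergence measure as M_global = L(x) + |gamma * L(x) - L(G(z))|, so it falls only when reconstruction quality improves and the equilibrium holds. A one-line sketch (scalar interface is illustrative):

```python
def m_global(loss_real, loss_fake, gamma=0.7):
    """BEGAN convergence measure: reconstruction quality plus the
    absolute deviation from the gamma-equilibrium."""
    return loss_real + abs(gamma * loss_real - loss_fake)
```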
Parsing the log: the training log will be saved as train.log in the directory you specified.
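To plot the curves you can pull the loss values out of train.log with a small regex. The line format below is an assumption for illustration; adjust LINE_RE to whatever the script actually prints:

```python
import re

# Hypothetical log-line format -- edit this pattern to match your train.log.
LINE_RE = re.compile(r"L_D:\s*([\d.]+)\s+L_G:\s*([\d.]+)")

def parse_log(text):
    """Return a list of (L_D, L_G) float pairs found in the log text."""
    return [(float(d), float(g)) for d, g in LINE_RE.findall(text)]
```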
L_D and L_G w/ GAN
CUDA_VISIBLE_DEVICES=x python compare.py --netG_GAN /path/to/netG.pth --netG_BEGAN /path/to/netG.pth --exp /path/to/a/dir/for/saving --tstDataroot /path/to/facades/test/
CUDA_VISIBLE_DEVICES=x python interpolateInput.py --tstDataroot ~/path/to/your/facades/test/ --interval 14 --exp /path/to/resulting/dir --tstBatchSize 4 --netG /path/to/your/netG_epoch_xxx.pth
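The idea behind interpolateInput.py is to feed the generator a series of blends between two conditioning inputs and watch how the output changes along the path. A minimal sketch of the blending itself, using plain nested lists in place of image tensors (the helper name is illustrative):

```python
def interpolate_inputs(a, b, steps):
    """Return `steps` linear blends between inputs a and b
    (endpoints included); each blend would then be fed to netG."""
    return [[(1 - t) * u + t * v for u, v in zip(a, b)]
            for t in (i / (steps - 1) for i in range(steps))]
```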