
Progressive Growing of GANs for Improved Quality, Stability, and Variation


[NOTE] This project was not going well, so I made a PyTorch implementation here. :fire: [pggan-pytorch]


Torch implementation of "Progressive Growing of GANs for Improved Quality, Stability, and Variation".
YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)


NEED HELP

[ ] (1) Implementing the pixel-wise normalization layer
[ ] (2) Implementing pre-layer normalization (for the equalized learning rate)
(I have tried both, but they failed to converge. Can anyone help implement these two custom layers? A sketch of (1) follows below; a sketch of (2) appears under the To-Do list.)
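For item (1), here is a minimal sketch of the paper's pixel-wise feature normalization, b = a / sqrt(mean_c(a^2) + eps), written as a Torch7 nn.Module. The class name is mine, the code is untested against this repo, and it assumes batched B x C x H x W input:

```lua
require 'nn'

local PixelwiseNorm, parent = torch.class('nn.PixelwiseNorm', 'nn.Module')

function PixelwiseNorm:__init(eps)
   parent.__init(self)
   self.eps = eps or 1e-8
end

function PixelwiseNorm:updateOutput(input)
   -- per-pixel norm over channels: n = sqrt(mean_c(a^2) + eps), size B x 1 x H x W
   self.norm = input:clone():pow(2):mean(2):add(self.eps):sqrt()
   self.output:resizeAs(input):copy(input):cdiv(self.norm:expandAs(input))
   return self.output
end

function PixelwiseNorm:updateGradInput(input, gradOutput)
   -- d/da_k [a_k / n] gives: g_k / n - a_k * mean_c(g_c * a_c) / n^3
   local n   = self.norm:expandAs(input)
   local dot = torch.cmul(gradOutput, input):mean(2)   -- mean_c(g * a), B x 1 x H x W
   self.gradInput:resizeAs(input):copy(gradOutput):cdiv(n)
   self.gradInput:add(-1, torch.cmul(input, dot:expandAs(input)):cdiv(n):cdiv(n):cdiv(n))
   return self.gradInput
end
```

Per the paper, this layer goes after each convolution in the generator only.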

Prerequisites

How to use?

[step 1.] Prepare dataset
The CelebA-HQ dataset is not available yet, so I used 100,000 generated PNGs of CelebA-HQ released by the author.
The quality of the generated images was good enough for training and for verifying the performance of the code.
If the CelebA-HQ dataset is released in the near future, I will update the experimental results.
[download]

  • CAUTION: loading 1024 x 1024 images and resizing them on every forward pass makes training slow. I recommend using the normal CelebA dataset until the output resolution reaches 256x256. (A pre-resizing sketch follows the folder layout below.)
---------------------------------------------
The training data folder should look like:
<train_data_root>
                |--classA
                        |--image1A
                        |--image2A ...
                |--classB
                        |--image1B
                        |--image2B ...
---------------------------------------------
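To avoid the per-iteration rescaling cost described in the caution above, one option is to shrink the PNGs once on disk before training. A one-off sketch using the Torch image package; the src and dst paths and the target size are placeholders, and you would repeat this per class folder:

```lua
require 'image'
require 'paths'

-- one-off pre-resizing pass so training never rescales 1024x1024 images
local src, dst, size = '/path/to/classA', '/path/to/classA_256', 256
paths.mkdir(dst)
for file in paths.iterfiles(src) do
   local img = image.load(paths.concat(src, file), 3, 'float')
   img = image.scale(img, size, size)
   image.save(paths.concat(dst, file), img)
end
```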

[step 2.] Run training

  • edit script/opts.lua to change the training parameters (don't forget to change the path to the training images; an illustrative options-table sketch follows below)
  • run and enjoy! (Multi-threaded data loading is supported.)
     $ python run.py
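The actual option names are defined in script/opts.lua and may differ from these; the keys below only illustrate the usual shape of a Torch options table you will be editing:

```lua
-- Illustrative only: the real keys live in script/opts.lua and may differ.
local opts = {
   train_data_root = '/path/to/train_data_root',  -- point this at your dataset
   lr              = 1e-3,                        -- learning rate
   nthread         = 4,                           -- data-loading worker threads
}
return opts
```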

[step 3.] Visualization

  • to start the display server:
     $ th server.lua
  • to check images during training:
     open http://<server_ip>:<port> in your browser
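If server.lua wraps the common szym/display package (an assumption on my part), pushing an intermediate sample to the browser from training code looks roughly like this:

```lua
local display = require 'display'     -- szym/display package (assumption)
local sample = torch.rand(3, 64, 64)  -- stand-in for a generated image
display.image(sample, {win = 1, title = 'generator samples'})
```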

Experimental results


Transition experiment: (still having trouble with the transition from 8x8 to 16x16.)

What does the printed log mean?

(example)
[E:0][T:91][ 91872/202599]    errD(real): 0.2820 | errD(fake): 0.1557 | errG: 0.3838    [Res:   4][Trn(G):0.0%][Trn(D):0.0%][Elp(hr):0.2008]
  • E: epoch / T: ticks (1 tick = 1,000 images) / errD, errG: losses of the discriminator and generator
  • Res: current output resolution
  • Trn: transition progress (100% means the stable training phase; less than 100% means the transition phase using the fade-in layer; see the sketch below.)
    • first Trn: transition of the fade-in layer in the generator.
    • second Trn: transition of the fade-in layer in the discriminator.
  • Elp(hr): elapsed time in hours
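For reference, the fade-in described above blends the old (upsampled) low-resolution branch with the new high-resolution branch as out = (1 - alpha) * old + alpha * new, with alpha ramping from 0 to 1 as Trn climbs to 100%. A minimal sketch of such a blending module (the class name is mine, not this repo's):

```lua
require 'nn'

local FadeIn, parent = torch.class('nn.FadeIn', 'nn.Module')

function FadeIn:__init()
   parent.__init(self)
   self.alpha = 0.0                 -- 0 = old branch only, 1 = new branch only
   self.gradInput = {torch.Tensor(), torch.Tensor()}
end

function FadeIn:setAlpha(alpha)
   self.alpha = alpha               -- ramp 0 -> 1 during the transition phase
end

function FadeIn:updateOutput(input)
   -- input = {oldBranch, newBranch}, two tensors of identical size
   self.output:resizeAs(input[1]):copy(input[1]):mul(1 - self.alpha)
   self.output:add(self.alpha, input[2])
   return self.output
end

function FadeIn:updateGradInput(input, gradOutput)
   self.gradInput[1]:resizeAs(gradOutput):copy(gradOutput):mul(1 - self.alpha)
   self.gradInput[2]:resizeAs(gradOutput):copy(gradOutput):mul(self.alpha)
   return self.gradInput
end
```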

To-Do List (will be implemented soon)

  • Equalized learning rate (weight normalization; see the sketch below)
  • Support WGAN-GP loss
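For the equalized learning rate (the failing item (2) from NEED HELP above), one way to approximate the paper's scheme in Torch7 is to keep weights at N(0, 1) and apply the He constant c = sqrt(2 / fan_in) at runtime. For a bias-free convolution, scaling the weights by c is equivalent to scaling the output by c, so nn.MulConstant can carry the constant. A sketch under that assumption (the helper name is mine):

```lua
require 'nn'

-- Equalized learning rate, sketched: unit-variance weights plus a runtime
-- He-constant scale, instead of folding the scale into the initialization.
local function equalizedConv(nIn, nOut, k, stride, pad)
   local conv = nn.SpatialConvolution(nIn, nOut, k, k, stride, stride, pad, pad):noBias()
   conv.weight:normal(0, 1)                -- unit-variance init, per the paper
   local c = math.sqrt(2 / (nIn * k * k))  -- He constant from the fan-in
   -- a bias can be re-added after the scaling if needed
   return nn.Sequential():add(conv):add(nn.MulConstant(c))
end
```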

Compatibility

  • CUDA v8.0
  • Tesla P40 (you may need more than 12GB of GPU memory; if you have less, adjust the batch_table in pggan.lua as sketched below)
  • python 2.7 / Torch7
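If batch_table follows the usual progressive-growing pattern of mapping output resolution to minibatch size (an assumption; check pggan.lua), shrinking the batch as resolution grows keeps memory use roughly flat. The values below are illustrative guesses, not the repo's actual settings:

```lua
-- Illustrative only: resolution -> minibatch size; tune to your GPU memory.
local batch_table = {
   [4]   = 64,
   [8]   = 32,
   [16]  = 16,
   [32]  = 8,
   [64]  = 4,
   [128] = 2,
   [256] = 1,
}
```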

Acknowledgement

Author

MinchulShin, @nashory
