
# Performance-comparison-of-GAN-on-cifar-10

Performance comparison of ACGAN, BEGAN, CGAN, DRAGAN, EBGAN, GAN, infoGAN, LSGAN, VAE, WGAN, and WGAN_GP on CIFAR-10.

Reference: https://github.com/hwalsuklee/tensorflow-generative-model-collections
The original code targets MNIST; we changed the network structures to apply to CIFAR-10 and evaluated the Inception Score (sketched after the command below). The network structures are otherwise almost the same.
The following results can be reproduced with:

```
python main.py --dataset cifar-10 --gan_type <TYPE> --epoch 60 --batch_size 64
```
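For reference, the Inception Score reported here is IS = exp(E_x[KL(p(y|x) || p(y))]), computed from the softmax outputs of an Inception network on generated samples. Below is a minimal NumPy sketch assuming a `preds` array of such softmax outputs; the repo's actual evaluation code may differ.

```python
import numpy as np

def inception_score(preds, num_splits=10, eps=1e-16):
    """IS from softmax predictions `preds` of shape (N, num_classes).

    `preds` and the split count are assumptions for illustration;
    they are not taken from this repo's code.
    """
    scores = []
    for split in np.array_split(preds, num_splits):
        p_y = np.mean(split, axis=0, keepdims=True)             # marginal p(y)
        kl = split * (np.log(split + eps) - np.log(p_y + eps))  # KL(p(y|x) || p(y))
        scores.append(np.exp(np.mean(np.sum(kl, axis=1))))
    return np.mean(scores), np.std(scores)
```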

## ACGAN

## BEGAN
The results are not good; we did not spend much time tuning the hyperparameters.
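For context, BEGAN's sensitivity to hyperparameters largely comes from its objective: the discriminator is an autoencoder, and a control variable k_t balances the real and fake reconstruction losses through the diversity ratio gamma and learning rate lambda_k. A minimal sketch with hypothetical names, not the repo's code:

```python
def began_losses(recon_real, recon_fake, k_t, gamma=0.75, lambda_k=0.001):
    """BEGAN objective from mean autoencoder reconstruction losses.

    gamma and lambda_k are typical paper values, assumed here for
    illustration; the repo may use different settings.
    """
    d_loss = recon_real - k_t * recon_fake
    g_loss = recon_fake
    # Proportional control: push E[L(G(z))] toward gamma * E[L(x)].
    k_next = min(max(k_t + lambda_k * (gamma * recon_real - recon_fake), 0.0), 1.0)
    return d_loss, g_loss, k_next
```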

## CGAN

## DRAGAN
Stable, robust, and fast to converge.
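DRAGAN's stability is usually attributed to its gradient penalty, applied to noisy perturbations around the real data instead of to real-fake interpolations as in WGAN-GP. A TF1-style sketch matching the era of the reference repo; the helper and the `D` callable are hypothetical:

```python
import tensorflow as tf  # TF1-style graph code

def dragan_penalty(D, x_real, lambda_gp=10.0):
    """Penalize deviation of D's gradient norm from 1 on points
    perturbed around the real data (NHWC image batches assumed)."""
    # Perturbation scaled by the batch standard deviation of the data.
    _, var = tf.nn.moments(x_real, axes=list(range(x_real.shape.ndims)))
    x_p = x_real + 0.5 * tf.sqrt(var) * tf.random_uniform(tf.shape(x_real))
    # Random interpolation between real and perturbed samples.
    alpha = tf.random_uniform([tf.shape(x_real)[0], 1, 1, 1], 0.0, 1.0)
    x_hat = x_real + alpha * (x_p - x_real)
    grads = tf.gradients(D(x_hat), [x_hat])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return lambda_gp * tf.reduce_mean(tf.square(slopes - 1.0))
```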

## EBGAN
The network structure is the same as BEGAN's, but training collapses.
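For comparison with BEGAN above: EBGAN also treats the autoencoder's reconstruction error as an energy, but uses a fixed hinge margin on fakes instead of BEGAN's adaptive k_t balancing. A sketch with an illustrative margin, not the repo's value:

```python
def ebgan_losses(recon_real, recon_fake, margin=20.0):
    """EBGAN losses from mean autoencoder reconstruction energies.

    The margin is a fixed hyperparameter; 20.0 is an assumed
    illustrative value, not taken from this repo.
    """
    d_loss = recon_real + max(0.0, margin - recon_fake)  # hinge on fakes
    g_loss = recon_fake                                  # G lowers fake energy
    return d_loss, g_loss
```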

## GAN

## infoGAN

## LSGAN (Least Squares GAN)

## WGAN
Not as good as in the paper. The network structure is the same as GAN's, but it converges too slowly.
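For reference, WGAN replaces the cross-entropy loss with the critic loss below and enforces the Lipschitz constraint by weight clipping, which is commonly blamed for slow convergence. A TF1-style sketch; function and variable names are hypothetical:

```python
import tensorflow as tf  # TF1-style graph code

def wgan_losses(d_real, d_fake):
    """Critic maximizes E[D(x)] - E[D(G(z))]; we minimize the negation."""
    d_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
    g_loss = -tf.reduce_mean(d_fake)
    return d_loss, g_loss

def weight_clip_ops(critic_vars, c=0.01):
    """Lipschitz constraint via clipping, applied after each critic
    update; c=0.01 follows the WGAN paper, not necessarily this repo."""
    return [v.assign(tf.clip_by_value(v, -c, c)) for v in critic_vars]
```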

## WGAN_GP
The discriminator is trained for 300 epochs in total, but the generator for only 60 (the same number as the other models), i.e. five discriminator updates per generator update. Converges slowly.
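The 5:1 ratio follows WGAN-GP's schedule of n_critic = 5 critic updates per generator update. The penalty itself constrains the critic's gradient norm on real-fake interpolations; a TF1-style sketch with a hypothetical helper, not the repo's exact code:

```python
import tensorflow as tf  # TF1-style graph code

def wgan_gp_penalty(D, x_real, x_fake, lambda_gp=10.0):
    """Unit-gradient-norm penalty on random interpolations between
    real and generated NHWC batches (lambda_gp=10 as in the paper)."""
    alpha = tf.random_uniform([tf.shape(x_real)[0], 1, 1, 1], 0.0, 1.0)
    x_hat = alpha * x_real + (1.0 - alpha) * x_fake
    grads = tf.gradients(D(x_hat), [x_hat])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return lambda_gp * tf.reduce_mean(tf.square(slopes - 1.0))
```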

## VAE
Collapsed. We also tried adding and removing batch-normalization layers, but it did not help.
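For reference, the VAE objective we would expect here is a reconstruction term plus the closed-form KL divergence of the diagonal-Gaussian posterior from N(0, I). A TF1-style sketch with hypothetical tensor names:

```python
import tensorflow as tf  # TF1-style graph code

def vae_loss(x, x_recon, mu, log_var):
    """ELBO-style loss; L2 reconstruction is an assumption here,
    and the repo may use a different reconstruction term."""
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=[1, 2, 3])
    # Closed-form KL(N(mu, exp(log_var)) || N(0, I)) per sample.
    kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
    return tf.reduce_mean(recon + kl)
```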
