PyTorch StudioGAN Release Notes

StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional and unconditional image generation.

v0.4.0

1 year ago
  • We verified the reproducibility of the implemented GANs.
  • We provide Baby, Papa, and Grandpa ImageNet datasets whose images are processed with a high-quality, anti-aliasing resizer.
  • StudioGAN provides a carefully established benchmark on standard datasets (CIFAR10, ImageNet, AFHQv2, and FFHQ).
  • StudioGAN supports InceptionV3, ResNet50, SwAV, DINO, and Swin Transformer backbones for GAN evaluation.
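The "anti-aliasing and high-quality resizer" used to build the processed ImageNet subsets can be illustrated with Pillow's Lanczos filter. This is a minimal sketch under that assumption; the function name is illustrative and StudioGAN's actual preprocessing pipeline may differ in detail:

```python
from PIL import Image

def clean_resize(img: Image.Image, size: int) -> Image.Image:
    """Resize an image with Pillow's Lanczos filter, a high-quality
    anti-aliasing resampler (illustrative of the dataset preprocessing)."""
    return img.resize((size, size), resample=Image.LANCZOS)

# Example: downscale an arbitrary image to a square 64x64 crop target.
img = Image.new("RGB", (123, 77), (10, 20, 30))
small = clean_resize(img, 64)
```

Lanczos (and bicubic) filters average over a window of source pixels, avoiding the aliasing artifacts that nearest-neighbor resizing introduces and that are known to distort GAN evaluation scores.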

v0.3.0

2 years ago
  • Add SOTA GANs: LGAN, TACGAN, StyleGAN2, MDGAN, MHGAN, ADCGAN, ReACGAN (our new paper).
  • Add five types of differentiable augmentation: CR, DiffAugment, ADA, SimCLR, BYOL.
  • Implement useful regularizations: Top-K training, Feature Matching, R1 regularization, and MaxGP.
  • Add Improved Precision & Recall, Density & Coverage, iFID, and CAS for reliable evaluation.
  • Support InceptionV3 and SwAV backbones for GAN evaluation.
  • Verify the reproducibility of StyleGAN2 and BigGAN.
  • Fix bugs in FreezeD, DDP training, Mixed Precision training, and ADA.
  • Support Discriminator-Driven Latent Sampling and Semantic Factorization for BigGAN evaluation.
  • Support wandb logging instead of TensorBoard.
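The FID and iFID metrics listed above compare real and generated samples by fitting a Gaussian to each set of backbone features and computing the Fréchet distance between the two Gaussians. A minimal NumPy/SciPy sketch of that distance follows; feature extraction with a backbone such as InceptionV3 is assumed to happen elsewhere, and the function name is illustrative rather than StudioGAN's API:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats1: np.ndarray, feats2: np.ndarray) -> float:
    """Fréchet distance between two feature sets, each modeled as a
    multivariate Gaussian (the quantity behind FID). Inputs are
    (num_samples, feature_dim) arrays of backbone features."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    sigma1 = np.cov(feats1, rowvar=False)
    sigma2 = np.cov(feats2, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; tiny imaginary
    # components from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical feature sets give a distance of (numerically) zero;
# a shifted distribution gives a strictly positive distance.
rng = np.random.default_rng(0)
real = rng.normal(size=(500, 8))
fake = rng.normal(loc=1.0, size=(500, 8))
```

Intra-class FID (iFID) applies the same computation per class, averaging the resulting distances, which is why it is a better probe of conditional fidelity than a single pooled FID.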

v0.2.0

3 years ago

Second release of StudioGAN with the following features:

  • Fix minor bugs (slow convergence when training GAN + ADA models, batch-norm statistics tracking during evaluation, etc.).
  • Add multi-node DistributedDataParallel (DDP) training.
  • Add comprehensive benchmarks on the CIFAR10, Tiny ImageNet, and ImageNet datasets.
  • Provide pre-trained models and log files for future research.
  • Add LARS optimizer and TSNE analysis.

v0.1.0

3 years ago

First StudioGAN release with the following features:

  • Extensive GAN implementations for PyTorch: from DCGAN to ADAGAN
  • Comprehensive benchmark of GANs on the CIFAR10 dataset
  • Better performance and lower memory consumption than the original implementations
  • Pre-trained models fully compatible with up-to-date PyTorch environments
  • Support for multi-GPU training (both DP and DDP), mixed precision, synchronized batch normalization, and TensorBoard visualization