Implementation of Sequence Generative Adversarial Nets with Policy Gradient in PyTorch
An implementation of SeqGAN (Paper: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient) in PyTorch. The code performs the experiment on synthetic data as described in the paper.
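At the core of SeqGAN is a REINFORCE-style generator update: sampled tokens are reinforced in proportion to a reward (in the paper, discriminator scores estimated via Monte Carlo rollouts). A minimal sketch of such a loss is shown below; the function name and tensor shapes are illustrative assumptions, not the actual API of this repo.

```python
import torch
import torch.nn.functional as F

def policy_gradient_loss(logits, targets, rewards):
    """REINFORCE-style loss used in SeqGAN-like generator updates (sketch).

    logits:  (batch, seq_len, vocab) unnormalized generator scores
    targets: (batch, seq_len) token ids sampled from the generator
    rewards: (batch, seq_len) per-token rewards, e.g. discriminator
             scores estimated via Monte Carlo rollouts
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log-probability of each sampled token
    picked = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # maximize expected reward == minimize negative reward-weighted log-prob
    return -(picked * rewards).mean()

# toy example with random tensors
logits = torch.randn(4, 20, 1000, requires_grad=True)
targets = torch.randint(0, 1000, (4, 20))
rewards = torch.rand(4, 20)
loss = policy_gradient_loss(logits, targets, rewards)
loss.backward()
```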
$ python main.py
Please refer to main.py for the supported arguments; model parameters can also be changed there.
Learning curve of the generator obtained after MLE training for 120 steps (1 epoch per step) followed by adversarial training for 150 rounds (1 epoch per round):
Learning curve of the discriminator obtained after MLE training for 50 steps (3 epochs per step) followed by adversarial training for 150 rounds (9 epochs per round):
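The two-phase schedule behind these curves can be sketched as plain pseudocode. The function names below are placeholders for the actual training routines in main.py; the step/epoch counts are the ones stated above.

```python
# Hedged sketch of the training schedule; names are illustrative,
# not the real API of this repository.
MLE_GEN_STEPS = 120        # generator MLE pretraining steps (1 epoch each)
MLE_DIS_STEPS = 50         # discriminator pretraining steps (3 epochs each)
ADV_ROUNDS = 150           # adversarial rounds
GEN_EPOCHS_PER_ROUND = 1
DIS_EPOCHS_PER_ROUND = 9

log = []

def train_generator_mle(epochs):   # placeholder: maximum-likelihood update
    log.append(("gen_mle", epochs))

def train_generator_pg(epochs):    # placeholder: policy-gradient update
    log.append(("gen_pg", epochs))

def train_discriminator(epochs):   # placeholder: discriminator update
    log.append(("dis", epochs))

# pretraining phase
for _ in range(MLE_GEN_STEPS):
    train_generator_mle(1)
for _ in range(MLE_DIS_STEPS):
    train_discriminator(3)

# adversarial phase: alternate policy-gradient generator updates
# with discriminator retraining, as in the SeqGAN paper
for _ in range(ADV_ROUNDS):
    train_generator_pg(GEN_EPOCHS_PER_ROUND)
    train_discriminator(DIS_EPOCHS_PER_ROUND)
```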
This code is based on Zhao Zijian's SeqGAN-PyTorch, Surag Nair's SeqGAN, and Lantao Yu's original implementation in TensorFlow. Many thanks to Zhao Zijian, Surag Nair, and Lantao Yu!