# Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation (NAACL 2019)

This is the official codebase for the following paper, implemented in TensorFlow:

Hareesh Bahuleyan, Lili Mou, Hao Zhou, Olga Vechtomova. Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation. NAACL 2019. https://arxiv.org/pdf/1806.08462.pdf
## Overview

This package contains the code for two tasks:

1. `snli` (autoencoder models)
2. `dialog` (encoder-decoder models)

For the above tasks, the code for the following models has been made available:

1. Variational autoencoder (`vae`) / Variational encoder-decoder (`ved`)
2. Deterministic Wasserstein autoencoder (`wae-det`) / Deterministic Wasserstein encoder-decoder (`wed-det`)
3. Stochastic Wasserstein autoencoder (`wae-stochastic`) / Stochastic Wasserstein encoder-decoder (`wed-stochastic`)

## Datasets

The models mentioned in the paper have been evaluated on two datasets:

1. SNLI
2. DailyDialog

Additionally, a further dataset is available for running dialog generation experiments. The data has been preprocessed, and the train-val-test split is provided in the `data/` directory of the respective task.
## Setup

Create a `conda` environment and install the dependencies:

```bash
conda create -n nlg python=3.6.1
source activate nlg
cd probabilistic_nlg/
pip install -r requirements.txt
```
Generate the word2vec embeddings for the task (shown here for `snli`; similarly for the `dialog` generation task):

```bash
cd snli/
python w2v_generator.py
```
## Training

Specify the model hyperparameters in the `model_config.py` file of the desired model, or pass them as command-line arguments. For example:

```bash
cd wae-det
vim model_config.py  # make necessary edits, or specify the hyperparameters as command-line arguments as below
python train.py --lstm_hidden_units=100 --vocab_size=30000 --latent_dim=100 --batch_size=128 --n_epochs=20 --kernel=IMQ --lambda_val=3.0
```
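The `--kernel=IMQ` and `--lambda_val` flags control the Wasserstein (MMD) penalty that pushes the encoded codes toward the prior, with `--lambda_val` corresponding to the penalty weight λ in the paper. As a rough illustration of what an IMQ-kernel MMD estimate computes, here is a NumPy toy sketch (not the repository's TensorFlow implementation; the function names, the kernel scale `c`, and the biased estimator are assumptions):

```python
import numpy as np

def imq_kernel(x, y, c=1.0):
    """Inverse multiquadratic kernel k(a, b) = c / (c + ||a - b||^2),
    evaluated pairwise between the rows of x and y."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return c / (c + d2)

def mmd_imq(z_q, z_p, c=1.0):
    """Biased squared-MMD estimate between encoder codes z_q
    and samples z_p drawn from the prior."""
    return (imq_kernel(z_q, z_q, c).mean()
            + imq_kernel(z_p, z_p, c).mean()
            - 2.0 * imq_kernel(z_q, z_p, c).mean())
```

The estimate shrinks as the distribution of encoded codes approaches the prior, which is the quantity the λ-weighted penalty trades off against the reconstruction loss.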
While training, the model checkpoints are saved in the `models/` directory, and the summaries for TensorBoard are stored in the `summary_logs/` directory. As training progresses, the metrics on the validation set are dumped into `bleu_log.txt` and the `bleu/` directory. The model configuration and the outputs generated during training are written to a text file within `runs/`.
## Generation

Run `predict.py`, specifying the desired checkpoint (`--ckpt`), to (1) generate sentences given test-set inputs; (2) generate sentences by randomly sampling from the latent space; and (3) linearly interpolate between sentences in the latent space.

By default, for `vae` and `wae-stochastic`, sampling from the latent space is carried out within one standard deviation of the mean. Note that `predict.py` also outputs the BLEU scores. Hence, when computing BLEU scores, it is ideal to simply use the mean (i.e., no sampling); for this, set the argument `--z_temp=0.0`.
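Conceptually, `--z_temp` scales the standard deviation used during reparameterized sampling. A minimal sketch of this idea (an illustrative NumPy snippet, not the repository's code; the function name and the log-sigma parameterization are assumptions):

```python
import numpy as np

def sample_latent(mu, log_sigma, z_temp=1.0, rng=None):
    """Draw z = mu + z_temp * sigma * eps with eps ~ N(0, I).

    z_temp=0.0 returns the mean deterministically (ideal for BLEU),
    while z_temp=1.0 samples with the full learned standard deviation.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(np.shape(mu))
    return mu + z_temp * np.exp(log_sigma) * eps
```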
The `random_sample_save(checkpoint, num_batches=3)` function call within `predict.py` automatically saves sentences generated by latent-space sampling into `samples/sample.txt`.
## Evaluating the Latent Space

To compute the metrics for evaluating the latent space (AvgLen, UnigramKL, Entropy) as proposed in the paper, run `evaluate_latent_space.py`, specifying the reference sentence set path (i.e., the training corpus) and the generated sentence samples path (~100k samples is recommended). For example:

```bash
python evaluate_latent_space.py -ref='snli/data/snli_sentences_all.txt' -gen='snli/wae-det/samples/sample.txt'
```
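To give a sense of what these metrics measure, here is an illustrative sketch (not the repository's exact implementation; the formulas, the KL direction, the smoothing, and the function names are assumptions): average sentence length of the samples, KL divergence between the unigram distributions of the generated and reference corpora, and the entropy of the generated unigram distribution.

```python
import math
from collections import Counter

def unigram_dist(sentences, vocab):
    """Add-one-smoothed unigram distribution over a fixed vocabulary."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts[w] + 1 for w in vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def latent_space_metrics(generated, reference):
    """Return (AvgLen, UnigramKL, Entropy) for a list of generated
    sentences against a list of reference sentences."""
    avg_len = sum(len(s.split()) for s in generated) / len(generated)
    vocab = {tok for s in generated + reference for tok in s.split()}
    p = unigram_dist(generated, vocab)  # generated distribution
    q = unigram_dist(reference, vocab)  # reference distribution
    unigram_kl = sum(p[w] * math.log(p[w] / q[w]) for w in vocab)
    entropy = -sum(p[w] * math.log(p[w]) for w in vocab)
    return avg_len, unigram_kl, entropy
```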
## Citation

If you found this code useful in your research, please cite:

```
@inproceedings{probabilisticNLG2019,
  title={Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation},
  author={Bahuleyan, Hareesh and Mou, Lili and Zhou, Hao and Vechtomova, Olga},
  booktitle={Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)},
  year={2019}
}
```