The codebase implements the LSTM language model baseline from https://arxiv.org/abs/1602.02410. The code supports running on a machine with multiple GPUs using synchronized gradient updates (the main difference from the paper).
The code was tested on a box with 8 GeForce Titan X GPUs, where LSTM-2048-512 (the default configuration) can process up to 100k words per second. The perplexity on the holdout set after 5 epochs is about 48.7 (vs 47.5 in the paper), which may be due to slightly different hyperparameters. It takes about 16 hours to reach these results on 8 Titan Xs. A DGX-1 is about 30% faster on the baseline model.
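For reference, perplexity is just the exponential of the mean per-word cross-entropy (negative log-likelihood in nats), so the reported holdout perplexity can be read directly off the training loss. A minimal sketch (the function name and the example loss value are illustrative, not from the repo):

```python
import math

def perplexity(mean_nll):
    """Perplexity from mean per-word negative log-likelihood (in nats)."""
    return math.exp(mean_nll)

# A mean log-loss of ln(48.7) ~ 3.886 nats/word corresponds to perplexity ~48.7.
print(perplexity(math.log(48.7)))
```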
Assuming the data directory is /home/rafal/datasets/lm1b/, execute:

python single_lm_run.py --datadir /home/rafal/datasets/lm1b/ --logdir <log_dir>

This starts a tmux session, which you can attach to with: tmux a
The session should contain several windows.
The script above executes the following commands, which can also be run manually:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python single_lm_train.py --logdir <log_dir> --num_gpus 8 --datadir <data_dir>
CUDA_VISIBLE_DEVICES= python single_lm_train.py --logdir <log_dir> --mode eval_test_ave --datadir <data_dir>
tensorboard --logdir <log_dir> --port 12012
Please note that this assumes the user has 8 GPUs available. Changing the CUDA_VISIBLE_DEVICES mask and the --num_gpus flag to a smaller number will still work, but training will obviously be slower.
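The two settings must stay consistent: the mask lists the device indices made visible to the process, and --num_gpus must match their count. A small illustrative helper (gpu_mask is a hypothetical name, not part of the repo) that builds the mask for a machine with fewer GPUs:

```python
def gpu_mask(num_gpus):
    """Build a CUDA_VISIBLE_DEVICES mask for the first num_gpus devices."""
    return ",".join(str(i) for i in range(num_gpus))

# On a 2-GPU box the training command above would become:
#   CUDA_VISIBLE_DEVICES=0,1 python single_lm_train.py --num_gpus 2 ...
print(gpu_mask(2))
```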
Results can be monitored using TensorBoard, listening on port 12012.
The command accepts an additional argument, --hpconfig, which allows various hyperparameters to be overridden, including:
To run a version of the model with 2 layers and 4096 state size, simply call:
python single_lm_run.py --datadir /home/rafal/datasets/lm1b/ --logdir <log_dir> --hpconfig num_layers=2,state_size=4096
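The --hpconfig value is a comma-separated list of name=value overrides. A minimal sketch of how such a string can be interpreted (this is an illustrative parser, not the repo's actual implementation):

```python
def parse_hpconfig(spec):
    """Parse a comma-separated "name=value,..." override string into a dict."""
    overrides = {}
    if not spec:
        return overrides
    for item in spec.split(","):
        name, value = item.split("=", 1)
        # Integer-valued hyperparameters are converted; others kept as strings.
        overrides[name.strip()] = int(value) if value.isdigit() else value
    return overrides

print(parse_hpconfig("num_layers=2,state_size=4096"))
```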
Let me know if you have any questions or comments at [email protected]