rnn_benchmarks

RNN benchmarks of PyTorch, TensorFlow and Theano

Welcome to the rnn_benchmarks repository! We offer:

  • A training speed comparison of different LSTM implementations across deep learning frameworks (see the sketch after this list)
  • Common input sizes, network configurations and cost functions from automatic speech recognition
  • Best-practice scripts for learning how to code up a network, optimizers, loss functions and so on
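
As a rough illustration of what such a training speed benchmark measures, here is a minimal PyTorch sketch (not the repository's exact code; all sizes and the step count are illustrative): it times a fixed number of forward/backward passes of a small LSTM classifier over one toy batch.

```python
import time
import torch

# Illustrative sizes, not the repository's defaults
batch, seq_len, in_dim, hidden, classes = 32, 100, 123, 320, 10

lstm = torch.nn.LSTM(in_dim, hidden, batch_first=True)
fc = torch.nn.Linear(hidden, classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(list(lstm.parameters()) + list(fc.parameters()))

x = torch.randn(batch, seq_len, in_dim)        # toy inputs
y = torch.randint(0, classes, (batch,))        # toy targets

times = []
for _ in range(20):                            # time 20 training steps
    start = time.time()
    optimizer.zero_grad()
    out, _ = lstm(x)
    loss = criterion(fc(out[:, -1, :]), y)     # classify from last time step
    loss.backward()
    optimizer.step()
    times.append(time.time() - start)

print('median step time: %.4f s' % sorted(times)[len(times) // 2])
```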

Update June 4th 2018

Run the benchmarks

Go to the folder 'main' and execute the 'main.py' script in the corresponding benchmark folder. Before running 'main.py', set the paths to the Python environments that contain the corresponding frameworks. The 'main.py' script creates a 'commands.sh' script that executes the benchmarks, and the measured execution times are written to 'results/results.csv'. Toy data and default parameters come from 'support.py', which ensures that every script uses the same hyperparameters.
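
Since every benchmark script pulls its toy data and hyperparameters from 'support.py', that file plausibly looks something like the following minimal sketch. The function names and values here are hypothetical, not the repository's actual API; the point is that a fixed seed and a single source of defaults give every framework the same workload.

```python
import numpy as np

def get_parameters():
    # Example values only; the real defaults live in support.py
    return {'batch_size': 32, 'seq_len': 100, 'input_dim': 123,
            'hidden_dim': 320, 'classes': 10, 'epochs': 50}

def get_toy_data(params, seed=1234):
    # Fixed seed so every framework's script sees identical data
    rng = np.random.RandomState(seed)
    x = rng.randn(params['batch_size'], params['seq_len'],
                  params['input_dim']).astype('float32')
    y = rng.randint(0, params['classes'],
                    size=params['batch_size']).astype('int64')
    return x, y
```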
