A PyTorch implementation of Listen, Attend and Spell (LAS) [1], an end-to-end automatic speech recognition framework that directly converts acoustic features into a character sequence using a single neural network.
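The model pairs an encoder ("listener") that downsamples the acoustic frames with an attention-based decoder ("speller") that emits one character at a time. Purely as an illustration of that idea, here is a minimal PyTorch sketch; the dimensions, the simple dot-product attention, and the toy example are assumptions made for this snippet, not this repository's actual implementation (see the source code for that):

```python
import torch
import torch.nn as nn

class Listener(nn.Module):
    """Pyramidal BiLSTM encoder: halves the time resolution before each layer after the first."""
    def __init__(self, input_dim=80, hidden_dim=256, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            in_dim = input_dim if i == 0 else hidden_dim * 4  # 2 directions * 2 concatenated frames
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=True))

    def forward(self, x):
        # x: (batch, time, feat)
        for i, lstm in enumerate(self.layers):
            if i > 0:
                # concatenate every two consecutive frames to halve the sequence length
                b, t, d = x.shape
                t = t - (t % 2)
                x = x[:, :t].reshape(b, t // 2, d * 2)
            x, _ = lstm(x)
        return x  # (batch, reduced_time, 2 * hidden_dim)

class Speller(nn.Module):
    """Attention-based LSTM decoder that emits one character per step (teacher forcing here)."""
    def __init__(self, vocab_size, enc_dim=512, hidden_dim=512, emb_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTMCell(emb_dim + enc_dim, hidden_dim)
        self.query = nn.Linear(hidden_dim, enc_dim)          # dot-product attention query
        self.out = nn.Linear(hidden_dim + enc_dim, vocab_size)

    def forward(self, enc, targets):
        # enc: (batch, enc_time, enc_dim); targets: (batch, out_len) token ids
        b = enc.size(0)
        h = enc.new_zeros(b, self.rnn.hidden_size)
        c = enc.new_zeros(b, self.rnn.hidden_size)
        context = enc.new_zeros(b, enc.size(2))
        emb = self.embed(targets)
        logits = []
        for t in range(targets.size(1)):
            h, c = self.rnn(torch.cat([emb[:, t], context], dim=-1), (h, c))
            scores = torch.bmm(enc, self.query(h).unsqueeze(-1)).squeeze(-1)   # (batch, enc_time)
            attn = torch.softmax(scores, dim=-1)
            context = torch.bmm(attn.unsqueeze(1), enc).squeeze(1)             # attended encoder summary
            logits.append(self.out(torch.cat([h, context], dim=-1)))
        return torch.stack(logits, dim=1)                    # (batch, out_len, vocab)

# Toy forward pass: 80-dim fbank features, 30-character vocabulary.
if __name__ == "__main__":
    listener, speller = Listener(), Speller(vocab_size=30)
    feats = torch.randn(2, 100, 80)
    ys = torch.randint(0, 30, (2, 12))
    print(speller(listener(feats), ys).shape)  # torch.Size([2, 12, 30])
```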
Install the Python dependencies and build the Kaldi-related tools:

$ pip install -r requirements.txt
$ cd tools; make KALDI=/path/to/kaldi
The example recipe is egs/aishell/run.sh; the aishell dataset can be downloaded for free.

1. $ cd egs/aishell and modify the aishell data path in run.sh to your own path.
2. $ bash run.sh, that's all!

You can change hyper-parameters with $ bash run.sh --parameter_name parameter_value, e.g. $ bash run.sh --stage 3. See the parameter names defined in egs/aishell/run.sh before the . utils/parse_options.sh line.
To call the training and decoding scripts directly, set up the environment first:

$ cd egs/aishell/
$ . ./path.sh

Train:
$ train.py -h

Decode:
$ recognize.py -h
The stage-by-stage workflow is laid out in egs/aishell/run.sh.
If you want to visualize your loss, you can use visdom:

1. Start the visdom server: $ visdom
2. Run $ bash run.sh --visdom 1 --visdom_id "<any-string>" or $ train.py ... --visdom 1 --visdom_id "<any-string>".
3. Open <your-remote-server-ip>:8097 in your browser, e.g. 127.0.0.1:8097.
4. Choose "<any-string>" under Environment to see your loss.
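The string passed as --visdom_id presumably just names the visdom environment, which is why it appears under Environment in the web UI. As a minimal, hypothetical sketch of that kind of logging (the environment and window names below are made up; this is not the repository's actual logging code):

```python
import numpy as np
from visdom import Visdom

# Connect to the server started with `$ visdom` (default port 8097).
# `env` corresponds to the "<any-string>" passed as --visdom_id.
viz = Visdom(server="http://127.0.0.1", port=8097, env="las-aishell")

for epoch, loss in enumerate([2.1, 1.4, 1.0, 0.8], start=1):
    viz.line(
        X=np.array([epoch]),
        Y=np.array([loss]),
        win="train_loss",                        # reuse one window for the whole curve
        update="append" if epoch > 1 else None,  # create on the first call, then append
        opts=dict(title="training loss", xlabel="epoch", ylabel="loss"),
    )
```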
Results on aishell:

| Model | CER (%) | Config |
|---|---|---|
| LSTMP | 9.85 | 4x(1024-512) |
| Listen, Attend and Spell | 13.2 | See egs/aishell/run.sh |
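CER is the character error rate: the character-level edit distance between the hypothesis and the reference, divided by the reference length, reported in percent. A small self-contained sketch of the computation (not the repository's scoring script):

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein distance / reference length, in percent."""
    r, h = list(ref), list(hyp)
    # dp[i][j] = edit distance between the first i reference chars and the first j hypothesis chars
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])              # substitution (or match)
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)      # deletion / insertion
    return 100.0 * dp[len(r)][len(h)] / max(len(r), 1)

# One substitution and one deletion against a 6-character reference -> ~33.3%
print(cer("今天天气很好", "今天天汽好"))
```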
[1] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in ICASSP 2016. (https://arxiv.org/abs/1508.01211v2)