Minimal Seq2Seq model with attention for neural machine translation in PyTorch.
This implementation relies on torchtext to minimize the dataset management and preprocessing code.
Download the spaCy tokenizers first:
python -m spacy download de
python -m spacy download en
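As a rough illustration of the attention mechanism this model uses, here is a minimal sketch of an additive (Bahdanau-style) attention module in PyTorch. The class and parameter names (`Attention`, `hidden_dim`) are illustrative, not taken from this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Additive attention: score each encoder state against the current
    decoder hidden state, then softmax-normalize into a distribution."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.attn = nn.Linear(hidden_dim * 2, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        src_len = encoder_outputs.size(1)
        # Repeat the decoder state across the source length so it can be
        # concatenated with every encoder output.
        hidden = decoder_hidden.unsqueeze(1).expand(-1, src_len, -1)
        energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim=2)))
        scores = self.v(energy).squeeze(2)        # (batch, src_len)
        weights = F.softmax(scores, dim=1)        # attention distribution
        # Weighted sum of encoder outputs -> context vector for the decoder.
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights
```

The decoder would feed `context` (together with the previous token embedding) into its RNN cell at each step; the exact wiring in this repository may differ.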
Based on the following implementations: