Minimalist NMT for educational purposes
Breaking changes:
- Requires six >= 1.12.
- Now runs on torch v1.8.0 and uses the deprecated legacy torchtext dataset implementations from torchtext v0.9.

Additions:
- You can now retrieve the n-best outputs during inference, rather than just the single best translation.
- The latest checkpoint is now tracked, so that training can be continued from it.
- Added a Colab notebook for training a small translation model on the Tatoeba task.
- Stable recurrent and Transformer models. Minor changes and refactoring may still happen before v1.0.
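The n-best retrieval mentioned above can be sketched generically. The helper below is a hypothetical illustration, not this library's actual API: it assumes inference (e.g. beam search) produces candidate translations paired with log-probability scores, and simply keeps the n highest-scoring ones.

```python
import heapq

def n_best(hypotheses, n=5):
    """Return the n highest-scoring hypotheses.

    hypotheses: list of (translation, log_prob) pairs, e.g. beam search output.
    Hypothetical helper for illustration; not the library's real interface.
    """
    return heapq.nlargest(n, hypotheses, key=lambda h: h[1])

# Example: keep the 2 best of 3 candidate translations.
candidates = [("ein Haus", -1.2), ("das Haus", -0.4), ("eine Haus", -3.0)]
print(n_best(candidates, n=2))
```

With `n=1` this reduces to the usual single-best decoding; larger `n` exposes the alternatives that were previously discarded.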