Open Source Neural Machine Translation and (Large) Language Models in PyTorch
This is the first release candidate for the OpenNMT-py major update to 2.0.0!
The main idea behind this release is the -- almost -- complete makeover of the data loading pipeline. A new 'dynamic' paradigm is introduced, allowing transforms to be applied to the data on the fly.
This has a few advantages, chief among them flexibility: these transforms can be specific tokenization methods, filters, noising, or any custom transform users may want to implement. Custom transform implementation is quite straightforward thanks to the existing base class and example implementations.
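To give an idea of the paradigm, here is a minimal sketch of how such on-the-fly transforms work: examples stream through a chain of transforms instead of being preprocessed up front. The class and function names below (`Transform`, `FilterTooLong`, `dynamic_iter`) are simplified stand-ins for illustration, not the actual OpenNMT-py API; see the linked docs for the real base class.

```python
# Illustrative sketch of the 'dynamic' paradigm. All names here are
# hypothetical stand-ins, NOT the actual OpenNMT-py classes.

class Transform:
    """Base class: subclasses override apply() to modify or drop an example."""

    def apply(self, example, is_train=False):
        return example  # default: pass-through


class FilterTooLong(Transform):
    """Drop training examples whose source exceeds max_len tokens."""

    def __init__(self, max_len=50):
        self.max_len = max_len

    def apply(self, example, is_train=False):
        if is_train and len(example["src"]) > self.max_len:
            return None  # returning None drops the example
        return example


def dynamic_iter(corpus, transforms, is_train=True):
    """Stream examples through the transform chain, on the fly."""
    for example in corpus:
        for t in transforms:
            example = t.apply(example, is_train=is_train)
            if example is None:
                break  # example was dropped by a transform
        if example is not None:
            yield example


corpus = [{"src": ["tok"] * 10}, {"src": ["tok"] * 100}]
kept = list(dynamic_iter(corpus, [FilterTooLong(max_len=50)]))
# Only the 10-token example survives the filter.
```

Because transforms run per example at iteration time, changing a filter threshold or tokenization option only requires editing the config, not re-running a preprocessing step over the whole corpus.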
You can check out how to use this new data loading pipeline in the updated docs and examples.
All the readily available transforms are described here.
Given sufficient CPU resources relative to GPU computing power, most of the transforms should not slow training down. (Note: for now, one producer process is spawned per GPU, so you would ideally need 2N CPU threads for N GPUs.)
A few features are dropped, at least for now:
Some very old checkpoints with previous fields and vocab structure are also incompatible with this new version.
For any users who still need some of these features, the previous codebase will be retained as legacy
in a separate branch. It will no longer receive extensive development from the core team, but PRs may still be accepted.