Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
SyntaxDot is a sequence labeler and dependency parser using Transformer networks. SyntaxDot models can be trained from scratch or using pretrained models, such as BERT or XLM-RoBERTa.
In principle, SyntaxDot can be used to perform any sequence labeling task, but so far the focus has been on:

* Part-of-speech and morphological tagging
* Lemmatization
* Dependency parsing (cast as sequence labeling; see the sketch below)
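The line of work SyntaxDot builds on treats dependency parsing itself as a sequence labeling problem: each token receives a label that encodes its head and dependency relation, so the same network architecture covers both tagging and parsing. The Rust sketch below illustrates one common encoding of this kind (relative head offset plus relation). It is a conceptual illustration only; the types, names, and label scheme here are hypothetical and are not SyntaxDot's actual API or label encoding.

```rust
// Conceptual sketch (not SyntaxDot's API): dependency parsing cast as
// sequence labeling. Each token gets a label encoding its head as a
// relative offset plus the dependency relation, so a standard sequence
// labeler can predict parses. All names are illustrative.

struct Token {
    form: &'static str,
    head: usize, // 1-based index of the head token; 0 means root
    relation: &'static str,
}

/// Encode each token's head as a relative-offset label, e.g. "+1/nsubj"
/// means "my head is the next token, attached with relation nsubj".
fn encode(tokens: &[Token]) -> Vec<String> {
    tokens
        .iter()
        .enumerate()
        .map(|(i, t)| {
            if t.head == 0 {
                format!("root/{}", t.relation)
            } else {
                // Head indices are 1-based (CoNLL style), positions 0-based.
                let offset = t.head as isize - (i as isize + 1);
                format!("{:+}/{}", offset, t.relation)
            }
        })
        .collect()
}

fn main() {
    // "She reads books": "reads" is the root; "She" and "books" attach to it.
    let sentence = [
        Token { form: "She", head: 2, relation: "nsubj" },
        Token { form: "reads", head: 0, relation: "root" },
        Token { form: "books", head: 2, relation: "obj" },
    ];
    for (t, label) in sentence.iter().zip(encode(&sentence)) {
        println!("{:<6} {}", t.form, label);
    }
}
```

Running this prints labels such as `+1/nsubj` and `-1/obj`, which a sequence labeler can learn to predict per token; a parser built this way then decodes the predicted labels back into a dependency tree.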
The easiest way to get started with SyntaxDot is to use a pretrained sticker2 model (SyntaxDot is currently compatible with sticker2 models).
SyntaxDot uses libtorch, the C++ library underlying PyTorch, for its neural network computations, so libtorch must be installed to build and run SyntaxDot.
SyntaxDot uses techniques from, or was inspired by, the following papers:
You can report bugs and feature requests in the SyntaxDot issue tracker.
For licensing information, see COPYRIGHT.md.