Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
- `T5Model` now has a required `model_type` parameter (`"t5"` or `"mt5"`).
- Fixed a bug in `ClassificationModel` when using multi-GPU training.
- Fixed a bug in `Seq2SeqModel` when the `model_name` contained backslashes.
- Fixed a bug when a `dataset_class` is specified in `Seq2SeqModel`.
- `ClassificationModel` is now fully compatible with Hugging Face.
- Added the `layoutlm` model for NER (see docs).
- Fixed the `eval_loss` calculation.

Mixed precision (fp16) inference is now supported for evaluation and prediction in the following models:
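A minimal sketch of the new required `T5Model` argument. The `simpletransformers.t5` import path and the `"google/mt5-small"` checkpoint name are assumptions, and the construction is guarded so the snippet stays illustrative when the library or checkpoint is unavailable:

```python
# model_type must now be passed explicitly as the first argument:
# "t5" selects the original T5 architecture, "mt5" the multilingual mT5 variant.
model_type = "mt5"

try:
    from simpletransformers.t5 import T5Model  # assumed import path

    # The second argument is the pretrained checkpoint name (assumed example);
    # use_cuda=False keeps the sketch CPU-only.
    model = T5Model(model_type, "google/mt5-small", use_cuda=False)
except Exception:
    # Library not installed or checkpoint not downloadable; sketch only.
    model = None
```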
You can disable fp16 by setting `fp16 = False` in the `model_args`.
Set the number of GPUs with `n_gpu` in the `model_args`.
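The two settings above can be sketched together. Passing `model_args` as a plain dict is one accepted form in this library (assumed here); the option names `fp16` and `n_gpu` come from the notes above, while the model class and checkpoint in the comment are hypothetical examples:

```python
# Sketch: configuring fp16 inference and the GPU count via model_args.
model_args = {
    "fp16": False,  # disable mixed-precision evaluation/prediction
    "n_gpu": 2,     # number of GPUs to use
}

# The dict would then be passed to a model constructor, e.g. (hypothetical
# class and checkpoint names):
#   model = ClassificationModel("roberta", "roberta-base", args=model_args)
```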
Currently supported in the following models:
Please note that ONNX support is still experimental.
See docs for details.