Simpletransformers Versions

Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI

v0.60.0

3 years ago

New Classification Models

Added

  • Added class weights support for Longformer classification
  • Added new classification models (multilabel classification is not supported yet):
    • DeBERTa
    • MPNet
    • SqueezeBert (no sliding window support)
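The new model types above can be loaded like any other classification model. A minimal sketch (the Hugging Face checkpoint names are illustrative choices, not prescribed by this release):

```python
# Map of newly supported model types to example Hugging Face checkpoints.
new_model_types = {
    "deberta": "microsoft/deberta-base",
    "mpnet": "microsoft/mpnet-base",
    "squeezebert": "squeezebert/squeezebert-uncased",  # no sliding window support
}

# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("deberta", new_model_types["deberta"], num_labels=2)
```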

Changed

  • Updated ClassificationModel logic to make it easier to add new models

v0.51.0

3 years ago

MT5, Adafactor optimizer, additional schedulers

Breaking change

  • T5Model now has a required model_type parameter ("t5" or "mt5")

Added

  • Added support for MT5
  • Added support for Adafactor optimizer
  • Added support for various schedulers:
    • get_constant_schedule
    • get_constant_schedule_with_warmup
    • get_linear_schedule_with_warmup
    • get_cosine_schedule_with_warmup
    • get_cosine_with_hard_restarts_schedule_with_warmup
    • get_polynomial_decay_schedule_with_warmup
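A sketch of configuring these options together, passing args as a plain dict. The exact option strings ("Adafactor", the scheduler name without the `get_` prefix) are assumptions inferred from the list above; check the docs before relying on them:

```python
# Hypothetical model_args selecting the Adafactor optimizer and one of the
# newly supported schedulers.
model_args = {
    "optimizer": "Adafactor",
    "scheduler": "constant_schedule_with_warmup",
    "warmup_steps": 100,
}

# from simpletransformers.t5 import T5Model
# model = T5Model("mt5", "google/mt5-base", args=model_args)  # model_type now required
```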

Fixed

  • Fixed issue with class weights not working in ClassificationModel when using multi-GPU training

v0.49.0

3 years ago

Fixed

  • Fixed issue with Seq2SeqModel when the model_name contained backslashes.
  • Fixed issue with saving args when a dataset_class is specified in Seq2SeqModel.

Changed

  • The Electra implementation used with ClassificationModel is now fully compatible with Hugging Face Transformers.

v0.48.6

3 years ago

Added

  • Added LayoutLM model for NER (see docs)

Fixed

  • Potential fix for inconsistent eval_loss calculation

v0.48.5

3 years ago

Mixed Precision Support for evaluation and prediction

Mixed precision (fp16) inference is now supported for evaluation and prediction in the following models:

  • ClassificationModel
  • ConvAI
  • MultiModalClassificationModel
  • NERModel
  • QuestionAnsweringModel
  • Seq2SeqModel
  • T5Model

You can disable fp16 by setting fp16 = False in the model_args.
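For example, to fall back to full precision for evaluation and prediction (args shown in dict form):

```python
# Disable mixed-precision (fp16) inference via model_args.
model_args = {"fp16": False}

# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("roberta", "roberta-base", args=model_args)
# result, model_outputs, wrong_predictions = model.eval_model(eval_df)  # runs in fp32
```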

Multi-GPU support for evaluation and prediction

Set the number of GPUs with n_gpu in model_args. Currently supported in the following models:

  • ClassificationModel
  • ConvAI
  • MultiModalClassificationModel
  • NERModel
  • QuestionAnsweringModel
  • Seq2SeqModel
  • T5Model
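A minimal sketch of requesting two GPUs for prediction (the checkpoint name is illustrative):

```python
# Request two GPUs for evaluation and prediction via model_args.
model_args = {"n_gpu": 2}

# from simpletransformers.ner import NERModel
# model = NERModel("bert", "bert-base-cased", args=model_args)
# predictions, raw_outputs = model.predict(["Simple Transformers supports multi-GPU inference"])
```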

Native ONNX support for Classification and NER tasks (Beta)

Please note that ONNX support is still experimental.

See docs for details.
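A sketch of the export-and-load flow. The method and flag names here (convert_to_onnx, onnx=True) are assumptions; since ONNX support is experimental, verify them against the linked docs:

```python
# Hypothetical ONNX export/load flow for a classification model.
onnx_output_dir = "outputs/onnx"

# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("roberta", "roberta-base")
# model.convert_to_onnx(onnx_output_dir)                # export to ONNX
# onnx_model = ClassificationModel("roberta", onnx_output_dir, args={"onnx": True})
# predictions, _ = onnx_model.predict(["ONNX inference test"])
```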

v0.48.0

3 years ago

Added

  • Added dynamic quantization support for all models.
  • Added ConvAI docs to documentation website. @pablonm3
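A sketch of turning on dynamic quantization. The flag name dynamic_quantize is an assumption; check the model args documentation for the exact option:

```python
# Hypothetical model_args enabling dynamic quantization (reduces model size
# and can speed up CPU inference).
model_args = {"dynamic_quantize": True}

# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("distilbert", "distilbert-base-uncased", args=model_args)
```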

v0.47.0

3 years ago

Added

  • Added support for testing models through a Streamlit app. Use the command `simple-viewer`.

See docs for details on the currently supported tasks.

v0.45.2

3 years ago

Added

  • Added dataloader_num_workers to ModelArgs for specifying the number of worker processes used by the PyTorch dataloader.
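A sketch of setting the new option; a common heuristic (an assumption here, not a recommendation from this release) is to use the machine's CPU count:

```python
import os

# Use one dataloader worker per CPU core, falling back to 1.
model_args = {"dataloader_num_workers": os.cpu_count() or 1}

# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("bert", "bert-base-uncased", args=model_args)
```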

Changed

  • Bumped required transformers version to 3.0.2

v0.45.0

3 years ago

Added

  • Added Text Representation Generation (RepresentationModel). @pablonm3
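A sketch of generating sentence vectors with the new model. The method and parameter names (encode_sentences, combine_strategy) are assumptions; see the docs for the actual API:

```python
# Hypothetical usage of RepresentationModel to embed sentences.
sentences = ["Example sentence one.", "Example sentence two."]

# from simpletransformers.language_representation import RepresentationModel
# model = RepresentationModel("bert", "bert-base-uncased")
# vectors = model.encode_sentences(sentences, combine_strategy="mean")
# vectors would be a (num_sentences, hidden_size) array.
```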

v0.44.0

3 years ago

Added

  • Lazy loading support added for QuestionAnsweringModel.
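A sketch of enabling lazy loading so training data is read from disk on demand rather than loaded fully into memory. The flag name lazy_loading is an assumption, as is passing the data as a file path; verify both against the docs:

```python
# Hypothetical model_args enabling lazy loading for question answering.
model_args = {"lazy_loading": True}

# from simpletransformers.question_answering import QuestionAnsweringModel
# model = QuestionAnsweringModel("bert", "bert-base-cased", args=model_args)
# model.train_model("data/train.json")  # a path on disk instead of an in-memory list
```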