:house_with_garden: Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
```python
from farm.infer import QAInferencer
from farm.data_handler.inputs import QAInput, Question

nlp = QAInferencer.load(
    "deepset/roberta-base-squad2",
    task_type="question_answering",
    batch_size=16,
    num_processes=0)

input = QAInput(
    doc_text="My name is Lucas and I live on Mars.",
    questions=Question(text="Who lives on Mars?",
                       uid="your-id"))

res = nlp.inference_from_objects([input], return_json=False)[0]

# High level attributes for your query
print(res.question)
print(res.context)
print(res.no_answer_gap)
# ...

# Attributes for individual predictions (= answers)
pred = res.prediction[0]
print(pred.answer)
print(pred.answer_type)
print(pred.answer_support)
print(pred.offset_answer_start)
print(pred.offset_answer_end)
# ...
```
:man_farmer: :woman_farmer: Thanks to all contributors for making FARMer's life better! @PhilipMay, @tstadel, @brandenchan, @tanaysoni, @Timoeller, @tholor, @bogdankostic
Minor release including an important bug fix for Question Answering
Fixing a bug introduced in 0.4.4 (#416 and #417) that resulted in only a single answer per document being returned in certain situations. This caused particular trouble for open-domain QA settings like in haystack.
Adding multiple optimizations and bug fixes to improve training from scratch. These changes cut training time in our benchmark from 616 hours down to 160 hours. See #305 for details.
We welcome a new language model to the FARM family that we found to be a really powerful alternative to the existing ones. ELECTRA is trained using a small generator network that replaces tokens with plausible alternatives and a discriminator that learns to detect these replaced tokens (see the paper for details: https://arxiv.org/abs/2003.10555). This makes pretraining more efficient and improves downstream performance on many tasks.
You can load it as usual via

```python
LanguageModel.load("google/electra-base-discriminator")
```
See HF's model hub for more model variants.
With QA being our favorite and most focused downstream task, we are happy to support an additional style of QA in FARM (#334). In contrast to the popular SQuAD-based models, these NQ models support binary answers, i.e. questions like "Is Berlin the capital of Germany?" can be answered with "Yes", together with an additional span that the model used as a "supporting fact" to give this answer.
The implementation leverages FARM's flexible prediction heads: one `QuestionAnsweringHead` that predicts a span (as in SQuAD) and one `TextClassificationHead` that predicts what type of answer the model should give (current options: span, yes, no, is_impossible).
Example:
```python
QA_input = [
    {
        "qas": ["Is Berlin the capital of Germany?"],
        "context": "Berlin (/bɜːrˈlɪn/) is the capital and largest city of Germany by both area and population."
    }
]
model = Inferencer.load(model_name_or_path="../models/roberta-base-squad2-nq", batch_size=batch_size, gpu=True)
result = model.inference_from_dicts(dicts=QA_input, return_json=False)
print(f"Answer: {result[0].prediction[0].answer}")
# >> Answer: yes
```
See this new example script for more details on training and inference.
Note: This release includes the initial version for NQ, but we are already working on some further simplifications and improvements in #411.
With inference speed being crucial for many deployments, especially for QA, we introduce a new benchmarking tool in #321. This allows us to easily compare the performance of different frameworks (e.g. ONNX vs. PyTorch), parameters (e.g. batch size) and code optimizations across FARM versions. See the readme for usage details and this spreadsheet for current results.
Modeling
Data handling
- `TextClassificationProcessor` and `RegressionProcessor` #387
- `TextClassificationProcessor` PR #383
Examples / Docs
Other
- `farm.conversion` module #365
:man_farmer: :woman_farmer: Thanks to all contributors for making FARMer's life better! @PhilipMay, @stefan-it, @ftesser, @tstadel, @renaud, @skirdey, @brandenchan, @tanaysoni, @Timoeller, @tholor, @bogdankostic
The Inferencer now has a fixed pool of processes instead of creating a new one for every inference call. This speeds up processing a bit and solves problems when using it in combination with frameworks like gunicorn/FastAPI (#329).
Old:
```python
...
inferencer.inference_from_dicts(dicts, num_processes=8)
```
New:
```python
inferencer = Inferencer.load(model_name_or_path, num_processes=8)
inferencer.inference_from_dicts(dicts)
...
```
You can now also use the Inferencer in a "streaming mode". This is especially useful in production scenarios where the Inferencer is part of a bigger pipeline (e.g. consuming documents from Elasticsearch) and you want to get predictions as soon as they are available (#315).
Input: Generator yielding dicts with your text
Output: Generator yielding your predictions
```python
dicts = sample_dicts_generator()  # it can be a list of dicts or a generator object
results = inferencer.inference_from_dicts(dicts, streaming=True, multiprocessing_chunksize=20)
for prediction in results:  # results is a generator object that yields predictions
    print(prediction)
```
While Transformers are conquering many of the current NLP tasks, there are still quite a few tasks (e.g. some document classification) where they are complete overkill. Benchmarking Transformers against "classic" uncontextualized embedding models is a common, good practice and is now possible without switching frameworks. We added basic support for loading embedding models like GloVe, word2vec and fastText and using them as a `LanguageModel` in FARM (#285).
See the example script
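For orientation, a minimal sketch of the loading pattern; the model name below is an assumption, so substitute any embedding model that FARM supports:

```python
# Minimal sketch, not the exact example script. The model name is an assumption;
# substitute any embedding model supported by FARM.
from farm.modeling.tokenization import Tokenizer
from farm.modeling.language_model import LanguageModel

tokenizer = Tokenizer.load("glove-german-uncased")         # assumed model name
language_model = LanguageModel.load("glove-german-uncased")
```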
We also added a new pooling method to get sentence or document embeddings from these models that can act as a strong baseline for transformer-based approaches (e.g. Sentence-BERT). The method is called S3E and was recently introduced by Wang et al. in "Efficient Sentence Embedding via Semantic Subspace Analysis" (#286).
See the example script
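As a rough sketch of how this plugs into the Inferencer; the `extraction_strategy="s3e"` value and the precomputed `s3e_stats` argument follow the example script linked above, so treat the exact names as assumptions:

```python
# Rough sketch; extraction_strategy="s3e" and s3e_stats are assumptions based on
# the linked example script.
s3e_stats = ...  # fit these pooling statistics on your corpus first (see example script)
inferencer = Inferencer.load(
    model_name_or_path="glove-german-uncased",  # assumed embedding model name
    task_type="embeddings",
    extraction_strategy="s3e",
    extraction_layer=-1,
    s3e_stats=s3e_stats,
)
result = inferencer.inference_from_dicts(dicts=[{"text": "FARM is fun to use."}])
```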
Modeling
Evaluation & Inference
Other
:man_farmer: :woman_farmer: Thanks to all contributors for making FARMer's life better! @brandenchan, @tanaysoni, @Timoeller, @tholor, @bogdankostic, @gsarti
Allows you to load data lazily from disk and preprocess a batch on-the-fly when needed during training.
```python
stream_data_silo = StreamingDataSilo(processor=processor, batch_size=batch_size)
```
=> Allows large datasets that don't fit in memory (e.g. for training from scratch)
=> Training starts directly. No initial time for preprocessing needed.
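A sketch of how the streaming silo slots into a training loop; everything besides `StreamingDataSilo` itself (processor, model, optimizer, schedule, device) is the usual FARM setup and is assumed here:

```python
# Sketch only: processor, model, optimizer, lr_schedule and device come from the
# usual FARM setup steps and are assumed to exist already.
from farm.data_handler.data_silo import StreamingDataSilo
from farm.train import Trainer

stream_data_silo = StreamingDataSilo(processor=processor, batch_size=32)
trainer = Trainer(model=model,
                  optimizer=optimizer,
                  data_silo=stream_data_silo,  # used like the regular DataSilo
                  epochs=1,
                  n_gpu=1,
                  lr_schedule=lr_schedule,
                  device=device)
trainer.train()
```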
Microsoft recently added optimizations to the ONNX runtime and reported substantial speed-ups compared to PyTorch. Since these improvements can be particularly useful for inference-heavy tasks such as QA, we added a way to export your `AdaptiveModel` to the ONNX format and load it into the `Inferencer`:
```python
model = AdaptiveModel(...)
model.convert_to_onnx(Path("./onnx_model"))
inferencer = Inferencer.load(model_name_or_path=Path("./onnx_model"))
```
=> See example
=> Speed improvements depend on device and batch size. On a Tesla V100 we measured improvements between 30% and 260% for end-to-end QA inference on a large document, and we still see potential for further optimizations.
| Batch Size | PyTorch | ONNX | ONNX V100 optimizations | Speedup |
|---|---|---|---|---|
| 1 | 27.5 | 12.8 | 10.6 | 2.59 |
| 2 | 17.5 | 11.5 | 9.1 | 1.92 |
| 4 | 12.5 | 10.7 | 8.3 | 1.50 |
| 8 | 10.6 | 10.2 | 8.2 | 1.29 |
| 16 | 10.5 | 10.1 | 7.8 | 1.38 |
| 32 | 10.1 | 9.8 | 7.8 | 1.29 |
| 64 | 9.9 | 9.8 | 7.8 | 1.26 |
| 128 | 9.9 | 9.8 | 7.7 | 1.28 |
| 256 | 10.0 | 9.8 | 7.9 | 1.26 |
Extracting embeddings from a model at inference time is now more similar to other inference modes.
Old:
```python
model = Inferencer.load(lang_model, task_type="embeddings", gpu=use_gpu, batch_size=batch_size)
result = model.extract_vectors(dicts=basic_texts, extraction_strategy="cls_token", extraction_layer=-1)
```
New:
```python
model = Inferencer.load(lang_model, task_type="embeddings", gpu=use_gpu, batch_size=batch_size,
                        extraction_strategy="cls_token", extraction_layer=-1)
result = model.inference_from_dicts(dicts=basic_texts, max_processes=1)
```
=> The preprocessing can now also utilize multiprocessing
=> It's easier to reuse other methods like `Inferencer.inference_from_file()`
Added support for text pair classification and ranking. Both can be especially helpful in semantic search settings where you want to (re-)rank search results, and they will be incorporated in our haystack framework soon. Examples: `next_sentence_head` in `examples/lm_finetuning.py`. #273
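A minimal sketch of how a text pair task could be set up with the new processor; the data directory and labels below are placeholders, and the tokenizer is assumed to come from the usual `Tokenizer.load(...)` step:

```python
# Sketch with placeholder data_dir and label_list; tokenizer is assumed to exist.
from farm.data_handler.processor import TextPairClassificationProcessor

processor = TextPairClassificationProcessor(tokenizer=tokenizer,
                                            max_seq_len=128,
                                            data_dir="data/my_pair_data",  # placeholder
                                            label_list=["0", "1"],         # placeholder labels
                                            metric="acc")
```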
:man_farmer: :woman_farmer: Thanks to all contributors for making FARMer's life better! @brandenchan, @tanaysoni, @Timoeller, @tholor, @bogdankostic, @andra-pumnea, @PhilipMay, @ftesser, @guggio
Open-source is more than just public code. It's a mindset of sharing, being transparent and collaborating across organizations. It's about building on the shoulders of other projects and advancing the state of technology together. That's why we built on top of the great Transformers library by huggingface and are excited to release today an even deeper compatibility that simplifies the exchange & comparison of models.
1. Convert models from/to transformers
```python
model = AdaptiveModel.convert_from_transformers("deepset/bert-base-cased-squad2", device="cpu", task_type="question_answering")
transformer_model = model.convert_to_transformers()
```
2. Load models from their new model hub:
```python
LanguageModel.load("TurkuNLP/bert-base-finnish-cased-v1")
Inferencer.load("deepset/bert-base-cased-squad2", task_type="question_answering")
...
```
Thanks to @BramVanroy and @johann-petrak we got some really hot new features here:
Automatic Mixed Precision (AMP) Training: Speed up your training by ~35%! Model params are usually stored with FP32 precision. Some model layers don't need that precision and can be reduced to FP16, which speeds up training and reduces the memory footprint. AMP is a smart way of figuring out for which params we can reduce precision without sacrificing performance (Read more). Test it by installing apex and setting `use_amp` to "O1" in one of the FARM example scripts (see the sketch after this list).
More flexible Optimizers & Schedulers: Choose whatever optimizer you like from PyTorch, apex or Transformers. Take your preferred learning rate schedule from Transformers or PyTorch (Read more).
Cross-validation: Get more reliable eval metrics on small datasets (see example)
Early Stopping: With early stopping, the run stops once a chosen metric is no longer improving, and you keep the best model up to this point. This helps prevent overfitting on small datasets and reduces training time if your model doesn't improve any further (see example).
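For orientation, a sketch of how AMP and a custom optimizer/schedule are configured together; the parameter values are illustrative, and model, data_silo, device and n_epochs are assumed from the usual setup:

```python
# Sketch only: model, data_silo, device and n_epochs come from the usual FARM
# setup and are assumed here. use_amp="O1" requires apex to be installed.
from farm.modeling.optimization import initialize_optimizer

model, optimizer, lr_schedule = initialize_optimizer(
    model=model,
    learning_rate=3e-5,
    device=device,
    n_batches=len(data_silo.loaders["train"]),
    n_epochs=n_epochs,
    use_amp="O1",                                      # mixed precision level
    optimizer_opts={"name": "AdamW"},                  # any supported optimizer
    schedule_opts={"name": "LinearWarmup", "warmup_proportion": 0.1},
)
```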
Save time if you run similar pipelines (e.g. only experimenting with model params): Store your preprocessed dataset & load it next time from cache:
```python
data_silo = DataSilo(processor=processor, batch_size=batch_size, caching=True)
```
Start & stop training by saving checkpoints of the trainer:
```python
trainer = Trainer.create_or_load_checkpoint(
    ...
    checkpoint_on_sigterm=True,
    checkpoint_every=200,
    checkpoint_root_dir=Path("/opt/ml/checkpoints/training"),
    resume_from_checkpoint="latest")
```
The checkpoints include the state of everything that matters (model, optimizer, lr_schedule ...) to resume training. This is particularly useful if your training crashes (e.g. because you are using spot cloud instances).
We are currently working a lot on simplifying large scale training and deployment. As a first step, we are adding support for training on AWS SageMaker. The interesting part here is the option to use Spot Instances and save about 70% of costs compared to regular instances. This is particularly relevant for training models from scratch, which we introduce in a basic version in this release and will improve over the next weeks. See this tutorial to get started with using SageMaker for training on down-stream tasks.
FARM now also runs on Windows. This implies one breaking change:
We now use pathlib and therefore expect all directory paths to be of type `Path` instead of `str`. #172
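For illustration, code that previously passed plain strings now passes `Path` objects; the data directory and labels below are placeholders:

```python
# Illustrative only; data_dir and label_list are placeholders, and tokenizer is
# assumed from the usual Tokenizer.load(...) step.
from pathlib import Path
from farm.data_handler.processor import TextClassificationProcessor

processor = TextClassificationProcessor(tokenizer=tokenizer,
                                        max_seq_len=128,
                                        data_dir=Path("../data/germeval18"),  # Path, not str
                                        label_list=["OTHER", "OFFENSE"],
                                        metric="f1_macro")
```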
:man_farmer: :woman_farmer: Thanks to all contributors for making FARMer's life better! @brandenchan, @tanaysoni, @Timoeller, @tholor, @maknotavailable, @johann-petrak, @BramVanroy
We believe QA is one of the most exciting tasks for transfer learning. However, the complexity of the task lets pipelines easily become messy, complicated and slow. This is unacceptable for production settings and creates a high barrier for developers to modify or improve them.
We put substantial effort in re-designing QA in FARM with two goals in mind: making it the simplest & fastest pipeline out there. Results:
See this blog post for more details and to learn about the key steps in a QA pipeline.
Good news for our corporate users: many of you told us that the automated downloads of datasets / models caused problems in environments with proxy servers. You can now pass the proxy details to Processor and LanguageModel in the format used by the requests library.
Example:
```python
proxies = {"https": "http://user:[email protected]:8000"}

language_model = LanguageModel.load(pretrained_model_name_or_path="bert-base-cased",
                                    language="english",
                                    proxies=proxies)
...
processor = BertStyleLMProcessor(data_dir="data/lm_finetune_nips",
                                 tokenizer=tokenizer,
                                 max_seq_len=128,
                                 max_docs=25,
                                 next_sent_pred=True,
                                 proxies=proxies)
```
Thanks to all contributors for making FARMer's life better! @johann-petrak, @brandenchan, @tanaysoni, @Timoeller, @tholor, @cregouby
When asking questions on long documents, the underlying language model needs to cut the document into multiple passages and answer the question on each of them. The outputs need to be aggregated, and the QA `Inferencer` now takes care of this aggregation when returning predictions.
Welcome RoBERTa and XLNet on the FARM :tada:! We did some intense refactoring in FARM to make it easier to add more language models. However, we will only add models where we see some decent advantages. One of the next models to follow will very likely be ALBERT ...
For now, we support RoBERTa/XLNet on (multilabel) text classification, text regression and NER. QA will follow soon.
:warning: Breaking Change - Loading of Language models has changed:
`Bert.load("bert-base-cased")` -> `LanguageModel.load("bert-base-cased")`
Pros:
Cons:
:warning: Breaking Change - Loading of tokenizers has changed:
`BertTokenizer.from_pretrained("bert-base-cased")` -> `Tokenizer.load("bert-base-cased")`
:warning: Breaking Change - `never_split_chars` is no longer supported as an argument for the Tokenizer.
Data preprocessing via the Processor is now fast while maintaining a low memory footprint. Before, the parallelization via multiprocessing caused serious memory issues on larger datasets (e.g. for language model fine-tuning). Now, we run a small chunk through the whole processor (-> Samples -> Featurization -> Dataset ...). The multiprocessing is handled by the DataSilo now, which simplifies the implementation.
With this new approach we can still easily inspect & debug all important transformations for a chunk, but only keep the resulting dataset in memory once a process has finished with a chunk.
We now also support multilabel classification. Prepare your data by simply setting `multilabel=True` in the `TextClassificationProcessor` and use the new `MultiLabelTextClassificationHead` for your model.
=> See an example here
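A minimal sketch of the two pieces together; data_dir and labels are placeholders, and the tokenizer is assumed from the usual `Tokenizer.load(...)` step:

```python
# Sketch with placeholder data_dir and labels; tokenizer is assumed to exist.
from farm.data_handler.processor import TextClassificationProcessor
from farm.modeling.prediction_head import MultiLabelTextClassificationHead

label_list = ["toxic", "obscene", "insult"]  # placeholder labels
processor = TextClassificationProcessor(tokenizer=tokenizer,
                                        max_seq_len=128,
                                        data_dir="data/my_multilabel_data",  # placeholder
                                        label_list=label_list,
                                        metric="acc",
                                        multilabel=True)
prediction_head = MultiLabelTextClassificationHead(layer_dims=[768, len(label_list)])
```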
To further simplify multi-task learning we added the concept of "tasks". With this you can now use one `TextClassificationProcessor` to preprocess data for multiple tasks (e.g. using two columns in your CSV for classification).
Example:
```python
processor = TextClassificationProcessor(...)
news_categories = ["Sports", "Tech", "Politics", "Business", "Society"]
publisher = ["cnn", "nytimes", "wsj"]
processor.add_task(name="category", label_list=news_categories, metric="acc", label_column_name="category_label")
processor.add_task(name="publisher", label_list=publisher, metric="acc", label_column_name="publisher_label")
```
Then connect each `PredictionHead` to its task by supplying the task name at initialization:

```python
category_head = MultiLabelTextClassificationHead(layer_dims=[768, 5], task_name="category")
publisher_head = MultiLabelTextClassificationHead(layer_dims=[768, 3], task_name="publisher")
```
We are happy to see how huggingface's repository is growing and how they made another major step with the new 2.0 release. Since their collection of language models is awesome, we will continue building upon their language models and tokenizers. However, we will keep following a different philosophy for all other components (data processing, training, inference, deployment ...) to improve usability, allow multi-task learning and simplify usage in the industry.
Thanks to all contributors: @tripl3a, @busyxin, @AhmedIdr, @jinnerbichler, @Timoeller, @tanaysoni, @brandenchan, @tholor
👩🌾 Happy FARMing!