Flair Releases

A very simple framework for state-of-the-art Natural Language Processing (NLP)

v0.13.1

4 months ago

This release adds some bug fixes on top of the v0.13.0 release, as well as a new dataset.

Bug fixes

Enhancements

New Datasets

New Contributors

Full Changelog: https://github.com/flairNLP/flair/compare/v0.13.0...v0.13.1

v0.13.0

6 months ago

This release adds several major new features such as (1) faster and more memory-efficient transformer training, (2) a new plugin system for custom logging and training, (3) new API docs for better documentation - still in beta, and (4) various new models, datasets, bug fixes and enhancements. This release also raises the minimum Python requirement to 3.8!

New Feature: Faster and more memory-efficient transformer training

This release integrates @helpmefindaname's transformer-smaller-training-vocab into the ModelTrainer. This temporarily reduces a transformer's vocabulary to only the tokens in the training dataset, and restores the full vocabulary after training. Depending on the dataset, this can yield huge savings in GPU memory and much faster fine-tuning.

To use this feature, simply add the flag reduce_transformer_vocab=True to the fine_tune method. For example, to fine-tune a distilbert model on TREC_6, run this code (step 7 has the flag to reduce the vocabulary):

from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# 1. get the corpus
corpus: Corpus = TREC_6()

# 2. what label do we want to predict?
label_type = "question_class"

# 3. create the label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)

# 4. initialize transformer document embeddings (many models are available)
document_embeddings = TransformerDocumentEmbeddings("distilbert-base-uncased", fine_tune=True)

# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)

# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)

# 7. fine-tune the model, but **reduce the vocabulary** for faster training
trainer.fine_tune(
    "resources/taggers/question-classification-with-transformer",
    reduce_transformer_vocab=True,  # set this to False for slow version
)

Involved PR: add reduce transformer vocab plugin by @helpmefindaname in https://github.com/flairNLP/flair/pull/3217

New Feature: Trainer Plugins

A new "Plugin" system was added to the ModelTrainer, allowing far greater options to customize the training cycle (and slimming down the code of the ModelTrainer somewhat). For instance, it is now possible to customize logging to a far greater degree and integrate third-party logging tools.

As an example, to integrate ClearML logging into the above script, simply instantiate the plugin and attach it to the trainer:

[...]

# import clearml and the logger plugin (plugin import path assumed)
import clearml
from flair.trainers.plugins import ClearmlLoggerPlugin

# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)

# NEW: instantiate a special logger and attach it to the trainer before the training run
ClearmlLoggerPlugin(clearml.Task.init(project_name="test", task_name="test")).attach_to(trainer)

# 7. fine-tune the model, but **reduce the vocabulary** for faster training
trainer.fine_tune(
    "resources/taggers/question-classification-with-transformer",
    reduce_transformer_vocab=True,  # set this to False for slow version
)

Involved PRs:

API Docs and other documentation

We are working towards improving our documentation. A first step was the release of our tutorial page. Now, we are adding (in beta) online API docs to make navigating the code and the options offered by Flair easier. To enable this, we converted all docstrings to Google-style docstrings. However, this process is still ongoing, so expect the API docs to improve in coming versions of Flair.

You can find the API docs here: https://flairnlp.github.io/flair/master/api/index.html

Involved PRs:

Model Refactorings

In an effort to unify class names, we now offer models that inherit from DefaultClassifier for each label type we predict, i.e.:

  • TokenClassifier for predicting Token labels
  • TextPairClassifier for predicting TextPair labels
  • RelationClassifier for predicting Relation labels
  • SpanClassifier for predicting Span labels
  • TextClassifier for predicting Sentence labels

An advantage of this structure is that most functionality (such as new decoders) needs to be implemented only once in DefaultClassifier and is then immediately usable for all model classes.

To enable this, we renamed and extended WordTagger as TokenClassifier, and renamed EntityLinker to SpanClassifier. This is not yet a breaking change, as the old names remain available. But in the future, WordTagger and EntityLinker will be removed.
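
To illustrate, here is a minimal sketch of the renaming (assuming both the new and the deprecated class names are exposed in flair.models):

# new, unified class names
from flair.models import TokenClassifier, SpanClassifier

# old names: these still work for now, but are deprecated
from flair.models import WordTagger, EntityLinker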

Involved PRs:

New Models

We also add two new model classes: (1) a TextPairRegressor for regression tasks on pairs of sentences (such as STS-B), and (2) an experimental Label Encoder method for few-shot classification.
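
As a purely hypothetical minimal sketch (the import location and parameter names are assumptions, not confirmed API):

from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextPairRegressor  # import location assumed

# a text-pair regressor for STS-B-style similarity scores
embeddings = TransformerDocumentEmbeddings("distilbert-base-uncased", fine_tune=True)
regressor = TextPairRegressor(embeddings=embeddings, label_type="similarity")  # label_type is an example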

Involved PRs:

New Datasets

Build Process

Bug Fixes

Enhancements

Breaking Changes

  • Removing the following legacy embeddings, as their support was dropped long ago:
    • XLNetEmbeddings
    • XLMEmbeddings
    • OpenAIGPTEmbeddings
    • OpenAIGPT2Embeddings
    • RoBERTaEmbeddings
    • CamembertEmbeddings
    • XLMRobertaEmbeddings
    • BertEmbeddings: you can use TransformerWordEmbeddings or TransformerDocumentEmbeddings instead.
  • Removing ELMoTransformerEmbeddings, as allennlp is no longer maintained.
  • Removal of the flair.hyperparameter module: we recommend using the hyperparameter optimizer of your choice as an external module; for example, see here how to fine-tune Flair models with the Hugging Face AutoTrain SpaceRunner.
  • Drop of the trainer.resume(...) functionality. Similarly to the flair.hyperparameter module, this functionality was dropped due to the trainer rework.
  • Changes to the trainer.train(...) and trainer.fine_tune(...) parameters:
    • monitor_train: bool was replaced by monitor_train_sample: float: this allows you to specify the percentage of training data points used for monitoring (setting monitor_train_sample=1.0 is equivalent to the previous behaviour of monitor_train=True).
    • eval_on_train_fraction is removed in favour of monitor_train_sample; see monitor_train_sample above.
    • eval_on_train_shuffle is removed.
    • anneal_with_prestarts and batch_growth_annealing have been removed.
    • num_workers has been removed; a single worker is now always used for data loading, as it is the fastest option for the in-memory datasets.
    • checkpoint has been removed as a parameter. You can use the CheckpointPlugin for the same behaviour.
    • cycle_momentum has been removed, as schedulers have been moved to plugins.
    • param_selection_mode has been removed, similar to the hyperparameter optimization module.
    • optimizer_state_dict and scheduler_state_dict were removed as part of the resume functionality.
    • anneal_against_dev_loss has been dropped, as annealing now always goes against the metric specified by main_evaluation_metric.
    • use_swa has been removed.
    • use_tensorboard, tensorboard_comment, tensorboard_log_dir and metrics_for_tensorboard are removed in favour of the TensorboardLogger plugin.
    • amp_opt_level is removed, as we moved to torch's native amp integration.
    • WordTagger has been deprecated as it was renamed to TokenClassifier
    • EntityLinker has been deprecated as it was renamed to SpanClassifier

New Contributors

Full Changelog: https://github.com/flairNLP/flair/compare/v0.12.2...v0.13.0

v0.12.2

1 year ago

Another follow-up release to 0.12 that fixes several bugs and adds a new multilingual frame tagger. Further, our new documentation website at https://flairnlp.github.io/docs/intro is now online!

New frame tagging model #3172

Adds a new model for detecting PropBank frames. The model is trained using the "FLERT" approach, so it is much stronger than the previous 'frame' model. We also added some training data from the Universal Proposition Bank to improve multilingual frame detection.

Use it like this:

from flair.nn import Classifier
from flair.data import Sentence

# load the large frame model
model = Classifier.load('frame-large')

# English sentence with the verb "return" in two different senses
sentence = Sentence("Dirk returned to Berlin to return his hat.")
model.predict(sentence)
print(sentence)

# German sentence with the verb "trug" in two different senses
sentence_de = Sentence("Dirk trug einen Koffer und trug einen Hut.")
model.predict(sentence_de)
print(sentence_de) 

This should print:

Sentence[9]: "Dirk returned to Berlin to return his hat." → ["returned"/return.01, "return"/return.02]

Sentence[9]: "Dirk trug einen Koffer und trug einen Hut." → ["trug"/carry.01, "trug"/wear.01]

The printout tells us that the verbs in both sentences are correctly disambiguated.

Documentation

Enhancements / New Features

  • more consistent behavior of context dropout and FLERT token #3168
  • setting device through environment variable #3148 (thanks @HallerPatrick)
  • modify Sentence.to_original_text() to take into account Sentence.start_position for whitespace calculation #3150 (thanks @mauryaland)
  • gather dev and test labels if the dataset is available #3162 (thanks @helpmefindaname)

Bug fixes

  • fix bugs caused by wrong data point equality and caching #3157
  • fix transformer smaller training vocab #3155 (thanks @helpmefindaname)
  • update scispacy version #3144 (thanks @mariosaenger)
  • unpin huggingface-hub #3149 (thanks @marctorsoc)

v0.12.1

1 year ago

This is a quick follow-up release to 0.12 that fixes a few small bugs and includes an improved version of our Zelda entity linker.

New Entity Linking model

We include a new version of our Zelda entity linker with improved predictions. Try it as follows:

from flair.nn import Classifier
from flair.data import Sentence

# load the model
tagger = Classifier.load('linker')

# make a sentence
sentence = Sentence('Kirk and Spock met on the Enterprise.')

# predict NER tags
tagger.predict(sentence)

# print predicted entities
for label in sentence.get_labels():
    print(label)

This should print:

Span[0:1]: "Kirk" → James_T._Kirk (0.9969)
Span[2:3]: "Spock" → Spock (0.9971)
Span[6:7]: "Enterprise" → USS_Enterprise_(NCC-1701-D) (0.975)

This correctly indicates that the span "Kirk" points to "James_T._Kirk". As the prediction for the string "Enterprise" shows, the model is still in beta and will be further improved with future releases.

Bug fixes

  • make transformer training vocab optional #3132
  • change token.get_tag() to token.get_label() #3135
  • update required version of transformers library #3138
  • update HunFlair tutorial to Flair 0.12 #3137

v0.12

1 year ago

Release 0.12 is out! This release greatly simplifies model usage for our users, includes our first entity linking model, adds support for the Ukrainian language, adds easy-to-use multitask learning, and many more features, improvements and bug fixes!

New Features

Simplify Flair model usage #3067

You can now load any Flair model through its parent class. Since most models inherit from Classifier, you can load and run multiple different models with exactly the same code. So, to run three different taggers for sentiment, entities and frames, do:

from flair.data import Sentence
from flair.nn import Classifier

# load three taggers to tag entities, frames and sentiment
tagger_1 = Classifier.load('ner')
tagger_2 = Classifier.load('frame')
tagger_3 = Classifier.load('sentiment')

# example sentence
sentence = Sentence('Dirk celebrated in Essen')

# predict with all three models
tagger_1.predict(sentence)
tagger_2.predict(sentence)
tagger_3.predict(sentence)

# print all predictions
for label in sentence.get_labels():
    print(label)

With this change, users no longer need to know which model classes implement which model. For more advanced users who do know this, the regular way for loading a model still works:

sentiment_tagger = TextClassifier.load('sentiment')

Entity Linking (BETA)

As of Flair 0.12 we ship an experimental entity linker trained on the Zelda dataset. The linker not only tags entities, but also attempts to link each entity to the corresponding Wikipedia URL if one exists.

To illustrate, let's use a short example text with two mentions of "Barcelona". The first refers to the football club "FC Barcelona", the second to the city "Barcelona".

from flair.nn import Classifier
from flair.data import Sentence

# load the model
tagger = Classifier.load('linker')

# make a sentence
sentence = Sentence('Bayern played against Barcelona. The match took place in Barcelona.')

# predict NER tags
tagger.predict(sentence)

# print sentence with predicted tags
print(sentence)

This should print:

Sentence[12]: "Bayern played against Barcelona. The match took place in Barcelona." → ["Bayern"/FC_Bayern_Munich, "Barcelona"/FC_Barcelona, "Barcelona"/Barcelona]

As we can see, the linker can resolve what the two mentions of "Barcelona" refer to:

  • the first mention "Barcelona" is linked to "FC_Barcelona"
  • the second mention "Barcelona" is linked to "Barcelona"

Additionally, the mention "Bayern" is linked to "FC_Bayern_Munich", telling us that here the football club is meant.

Entity linking support includes:

  • Support for the ZELDA candidate lists #3108 #3111
  • Support for the ZELDA training and evaluation dataset #3088 (see the loading sketch below)
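
For example, a minimal loading sketch (assuming the corpus class is exposed as flair.datasets.ZELDA):

from flair.datasets import ZELDA

# load the ZELDA entity linking corpus and inspect it
corpus = ZELDA()
print(corpus)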

Support for Ukrainian language #3026

This version adds support for Ukrainian taggers, embeddings and datasets. For instance, to do NER and POS tagging of a Ukrainian sentence, do:

# Load Ukrainian NER and POS taggers
from flair.models import SequenceTagger

ner_tagger = SequenceTagger.load('ner-ukrainian')
pos_tagger = SequenceTagger.load('pos-ukrainian')

# Tag a sentence
from flair.data import Sentence
sentence = Sentence("Сьогодні в Знам’янці проживають нащадки поета — родина Шкоди.")

ner_tagger.predict(sentence)
pos_tagger.predict(sentence)

print(sentence)
# "Сьогодні в Знам’янці проживають нащадки поета — родина Шкоди." →
# ["Сьогодні"/ADV, "в"/ADP, "Знам’янці"/LOC, "Знам’янці"/PROPN, "проживають"/VERB, "нащадки"/NOUN, "поета"/NOUN, "—"/PUNCT, "родина"/NOUN, "Шкоди"/PERS, "Шкоди"/PROPN, "."/PUNCT]

Multitask Learning (#2910 #3085 #3101)

We add support for multitask learning in Flair (closes #2508 and closes #1260), with a hopefully simple syntax to define multiple tasks that share parts of the model.

The most common part to share is the transformer, which you might want to fine-tune across several tasks. Instantiate a transformer embedding and pass it to two separate models that you instantiate as before:

from flair.datasets import SENTEVAL_CR, SENTEVAL_SST_GRANULAR
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.nn.multitask import make_multitask_model_and_corpus  # helper import path assumed
from flair.trainers import ModelTrainer

# --- Embeddings that are shared by both models --- #
shared_embedding = TransformerDocumentEmbeddings("distilbert-base-uncased", fine_tune=True)

# --- Task 1: Sentiment Analysis (5-class) --- #
corpus_1 = SENTEVAL_SST_GRANULAR()

model_1 = TextClassifier(shared_embedding,
                         label_dictionary=corpus_1.make_label_dictionary("class"),
                         label_type="class")

# -- Task 2: Binary Sentiment Analysis on Customer Reviews -- #
corpus_2 = SENTEVAL_CR()

model_2 = TextClassifier(shared_embedding,
                         label_dictionary=corpus_2.make_label_dictionary("sentiment"),
                         label_type="sentiment",
                         )

# -- Define mapping (which tagger should train on which model) -- #
multitask_model, multicorpus = make_multitask_model_and_corpus(
    [
        (model_1, corpus_1),
        (model_2, corpus_2),
    ]
)

# -- Create model trainer and train -- #
trainer = ModelTrainer(multitask_model, multicorpus)
trainer.fine_tune("resources/taggers/multitask_test")

The mapping part here defines which tagger should be trained on which corpus. By calling make_multitask_model_and_corpus with a mapping, you get a corpus and model object that you can train as before.

Explicit context boundaries in Transformer embeddings #3073 #3078

We improve our FLERT model by now explicitly marking up context boundaries using a new [FLERT] special token in our transformer embeddings. Our experiments show that the context marker leads to improved NER results:

Transformer         Context-Marker   CoNLL-03 Test F1
bert-base-uncased   none             91.52 +- 0.16
                    [SEP]            91.38 +- 0.18
                    [FLERT]          91.56 +- 0.17
xlm-roberta-large   none             93.73 +- 0.20
                    [SEP]            93.76 +- 0.13
                    [FLERT]          93.92 +- 0.14

In the table, none is the approach used in previous Flair versions. [SEP] means using the standard separator symbol as context delimiter. [FLERT] means using a new dedicated special token.

As [FLERT] performs best in our experiments, the [FLERT] context marker is now activated by default (a usage sketch follows below).

More details: Assume the current sentence is "Peter Blackburn" and the previous sentence ends with "to boycott British lamb .", while the next sentence starts with "BRUSSELS 1996-08-22 The European Commission".

In this case,

  1. if use_context_separator=False, the embedding is produced from this string: to boycott British lamb . Peter Blackburn BRUSSELS 1996-08-22 The European Commission
  2. if use_context_separator=True, the embedding is produced from this string: to boycott British lamb . [FLERT] Peter Blackburn [FLERT] BRUSSELS 1996-08-22 The European Commission
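
A minimal usage sketch (use_context_separator is the parameter described above; the other values are just example settings):

from flair.embeddings import TransformerWordEmbeddings

# FLERT-style embeddings with cross-sentence context and the [FLERT] marker
embeddings = TransformerWordEmbeddings(
    model="xlm-roberta-large",
    use_context=True,            # enable cross-sentence context
    use_context_separator=True,  # mark context boundaries with [FLERT]
)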

Integrate transformer-smaller-training-vocab #3066

We integrate the transformer-smaller-training-vocab library into the ModelTrainer. With it, you can reduce the size of transformer models when training and evaluating on specific datasets. This leads to faster training times and a smaller memory footprint. Documentation on this new feature will be added soon!

Masked Relation Classifier #2748 #2993 with various Encoding Strategies #3023 (BETA)

We now include BETA support for a new type of relation extraction model that achieves much higher accuracy than our vanilla relation extraction, but at increased computational cost. Documentation for this will be added as we iterate on the model.

ONNX compatible models #2640 #2643 #3041 #3075

This release continues the journey on making our models more ONNX compatible.

Other features

  • Add push to Hub functionalities #2897
  • Add LayoutLM/LayoutXLM support and the SROIE dataset #2980
  • Convenience method for learning rate factor #2888 #2893

New Datasets

  • Add fewnerd corpus #3103
  • Add support for NERMuD 2023 Dataset #3087
  • Adds ZELDA Entity Linking dataset #3088
  • Added Ukrainian NER and UD datasets #3069
  • Add support MasakhaNER v2 dataset #3013
  • Add support for MultiCoNerV2 #3006
  • Add support for new ICDAR Europeana NER Dataset #2911
  • datasets: add support for HIPE-2022 #2735 #2827 #2805

Major refactorings

  • Unify loss reduction by making sure that all losses are summed over all points, instead of averaged #2933 #2910
  • Require Python 3.7 or higher #2769
  • Flatten DefaultClassifier interface #2978
  • Restructure Tokenizer and Splitter modules #3002
  • Refactor Token and Sentence Positional Properties #3001
  • Serialization of embeddings #3011

Various Improvements

Enhancements

  • add functionality for using proxies #3082
  • add option not to shuffle the first epoch #3076
  • improved Tars Context #3063
  • release optimizer memory and fix legacy tokenization #3043
  • add time elapsed to training printout #2983
  • separate between token-lengths and sub-token lengths #2990
  • small speed optimizations #2975
  • change output of .text to original string #2974
  • remove BAD_EPOCHS printout for most schedulers #2970
  • warn if resuming with too low max_epochs & add additional_epochs parameter #2895
  • embeddings: add support for T5 encoder models #2896
  • add py.typed file for PEP-561 compatibility #2858
  • tars classifier always predict something on single label #2838
  • make add_unk optional and don't use it for ner #2839
  • add deprecation warning for SentenceDataset rename #2819
  • more precise type hint for eval_on_train_fraction #2811
  • better handling for consecutive whitespaces in Sentence #2721 (already in flair 0.11.3)
  • remove unnecessary more-itertools pin #2730 (already in flair 0.11.3)
  • add exclude_labels parameter to trainer.train #2724 (already in flair 0.11.3)
  • add option to force token-level predictions in SequenceTagger #2750 (already in flair 0.11.3)

Build

  • unified test classes, to ensure that all models & embeddings have tested the basic functionality #2981
  • add missing dependency pre-commit to requirements-dev.txt #3093
  • fix pre-commit bug by upgrading to isort 5.11.5 #3106 #3107
  • update pytest and flake8 versions #2741
  • pytest flake precommit update #2820
  • pin flake8 to v4 #2892
  • specify test paths #2932
  • pin versions for unit tests #2994
  • unit tests: Set a seed so test_train_load_use_classifier doesn't randomly fail #2834
  • replace issue templates with issue forms #3051
  • github actions cache #2753 (already in flair 0.11.3)

Documentation

  • Add Missing Import to Tutorial 5 #2902
  • Documentation pointers #2927
  • readme: fix BibTeX for FLERT paper #2806 #2821
  • docs: mention HIPE-2022 in corpus tutorial #2807

Code improvements

  • add return types to Model and Classifier #3121
  • removed undefined names #3054 #3056
  • add docstrings missing for ModelTrainer.train() parameters #2961
  • remove "tag_to_bioes" (Sequence) Corpus parameter, as it is not used #2812
  • update hf-hub version #2837
  • use transformers sentencepiece requirement #2835
  • replace deprecated logging.warn with logging.warning #2829
  • various mypy issues #2822 #2845 #2905
  • removed some model classes that were very beta: the DependencyParser, the DistancePredictor and the SimilarityLearner. #2910
  • remove legacy TransformerXLEmbeddings class #2768 (already in flair 0.11.3)

Bug fixes

  • fix train error missing dev split #3115
  • fix Avg Pooling in the Entity Linker #3123
  • call super().__setstate__() in Embeddings #3057
  • remove konoha from requirements.txt #3060
  • fix label alignment if the sentence contains invalid tokens #3052
  • change indexing in TARSTagger predict #3058
  • fix training sample count in UD English #3044
  • fix comment parsing for conllu datasets #3020
  • HunFlair: Fix loading of datasets #3030 #3029
  • persist needs_manual_ocr #3012
  • save initial hidden states in sequence tagger #3010
  • do not save Path objects to model cards #2998
  • make JsonlCorpus create span labels #2863
  • JsonlDataset: Fix code that claims to set "O" labels to actually set them #2817
  • relationClassifier fix #2986
  • fix problem in loading TARSClassifier #2987
  • add missing tab for tensorboard #2922
  • fast tokenizer reload fix pt.2: Bloom model #2904
  • fix transformer embeddings for sentence with trailing whitespace #2891
  • added label_name parameter to render_ner_html #2850
  • allow BIO evaluation on sequence tagger #2787
  • refactorings for initialization from state dict #2846
  • save and load "tag_format" for sequence tagger model #2840
  • do not remove other labels of sentence for set_label on Token and Span #2831
  • fix left-over cases of token.get_tag(), which was renamed #2815
  • remove wrong boolean check for loading datasets RE_ENGLISH_CONLL04 #2779
  • added missing property decorator in PooledFlairEmbeddings #2744 (already in flair 0.11.3)
  • fix wrong initialisations of label (where data_type was missing) #2731 (already in flair 0.11.3)
  • update gdown requirement, fix download for dataset NER_MULTI_WIKIANN #2757 (already in flair 0.11.3)
  • make Span detection more robust #2752 (already in flair 0.11.3)

v0.11

2 years ago

Release 0.11 is taking us ever closer to that 1.0 release! This release makes large internal refactorings and code quality / efficiency improvements to prepare Flair 1.0. We also add new features such as text clustering, a regular expression tagger, more dataset manipulation options, and some preview features like a prototype decoder.

New Features

Regular Expression Tagger (#2533)

You can now do sequence labeling in Flair with regular expressions! Simply define a RegexpTagger and add some regular expressions, like in the example below:

from flair.data import Sentence
from flair.models import RegexpTagger

# sentence with a number and two quotes
sentence = Sentence('Figure 11 is both "too colorful" and "not informative enough".')

# instantiate regex tagger with a quote matching pattern
tagger = RegexpTagger(mapping=(r'(["\'])(?:(?=(\\?))\2.)*?\1', 'QUOTE'))

# also add a number mapping
tagger.register_labels(mapping=(r'\b\d+\b', 'NUMBER'))

# tag sentence
tagger.predict(sentence)

# check out matches
for entity in sentence.get_labels():
    print(entity)

Clustering with Flair (#2573 #2619)

Flair now supports clustering by way of sklearn. Embed your sentences with a pre-trained embedding as below, then cluster them with any algorithm. Check the example below where we use sentence transformers and k-means clustering. A 'trained' clustering model can be saved and loaded for prediction, just like any other Flair classifier:

from sklearn.cluster import KMeans

from flair.data import Sentence
from flair.datasets import TREC_6
from flair.embeddings import SentenceTransformerDocumentEmbeddings
from flair.models import ClusteringModel

embeddings = SentenceTransformerDocumentEmbeddings()
# store all embeddings in memory which is required to perform clustering
corpus = TREC_6(memory_mode='full').downsample(0.05)

clustering_model = ClusteringModel(model=KMeans(n_clusters=6), embeddings=embeddings)

# fit the model on a corpus
clustering_model.fit(corpus)

# save the model
clustering_model.save(model_file="clustering_model.pt")

# load saved clustering model
model = ClusteringModel.load(model_file="clustering_model.pt")

# make example sentence
sentence = Sentence('Getting error in manage categories - not found for attribute "navigation _ column"')

# predict for sentence
model.predict(sentence)

# print sentence with prediction
print(sentence)

Dataset Manipulations

You can now change label names, ignore labels and add custom preprocessing when loading a dataset.

For instance, the standard WNUT_17 dataset comes with 7 NER labels:

corpus = WNUT_17(in_memory=False)
print(corpus.make_label_dictionary('ner'))

which prints:

Dictionary with 7 tags: <unk>, person, location, group, corporation, product, creative-work

With the following code, you can rename some labels ('person' is renamed to 'PER'), merge two labels into one ('group' and 'corporation' are merged into 'ORG'), and ignore two other labels ('creative-work' and 'product' are ignored):

corpus = WNUT_17(in_memory=False, label_name_map={
    'person': 'PER',
    'location': 'LOC',
    'group': 'ORG',
    'corporation': 'ORG',
    'product': 'O',
    'creative-work': 'O', # by renaming to 'O' this tag gets ignored
})
print(corpus.make_label_dictionary('ner'))

which prints:

Dictionary with 4 tags: <unk>, PER, LOC, ORG

You can manipulate the data even more with custom preprocessing functions. See the example in #2708.

Other New Features and Data Sets

  • A new WordTagger class for simple word-level predictions (#2607)
  • Classic WordEmbeddings can now be fine-tuned in Flair (#2491) by setting fine_tune=True (see the sketch after this list). This also adds the fine-tuning mode of https://arxiv.org/abs/2110.02861, which seems to "reduce gradient variance that comes from the highly non-uniform distribution of input tokens"
  • Add NER_MULTI_CONER Dataset (#2507)
  • Add support for HIPE 2022 (#2675)
  • Allow trainer to work with multiple learning rates (#2641)
  • Update hyperparameter tuning (#2633)
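
As referenced in the list above, a minimal sketch of fine-tuning classic word embeddings (flag from #2491):

from flair.embeddings import WordEmbeddings

# classic GloVe embeddings, now updated during training instead of kept frozen
embeddings = WordEmbeddings("glove", fine_tune=True)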

Preview Features

Some preview features in beta stage, use at your own risk.

Prototypical networks in Flair (#2627)

Prototype networks learn prototypes for each target class. For each data point to be classified, the network predicts a vector in class-prototype space, which is then compared to all class prototypes. The prediction is the closest class prototype. See the paper Prototypical Networks for Few-shot Learning for more info.

@plonerma implemented a custom decoder that can be added to any Flair model that inherits from DefaultClassifier (i.e. nearly all Flair models). For instance, use this script:

from flair.data import Corpus
from flair.datasets import UP_ENGLISH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import WordTagger
from flair.nn import PrototypicalDecoder
from flair.trainers import ModelTrainer

# what tag do we want to predict?
tag_type = 'frame'

# get a corpus
corpus: Corpus = UP_ENGLISH().downsample(0.1)

# make the tag dictionary from the corpus
tag_dictionary = corpus.make_label_dictionary(label_type=tag_type)

# initialize simple embeddings
embeddings = TransformerWordEmbeddings(model="distilbert-base-uncased",
                                       fine_tune=True,
                                       layers='-1')

# initialize prototype decoder
decoder = PrototypicalDecoder(num_prototypes=len(tag_dictionary),
                              embeddings_size=embeddings.embedding_length,
                              distance_function='euclidean',
                              normal_distributed_initial_prototypes=True,
                              )

# initialize the WordTagger, but pass the prototype decoder
tagger = WordTagger(embeddings,
                    tag_dictionary,
                    tag_type,
                    decoder=decoder)

# initialize trainer
trainer = ModelTrainer(tagger, corpus)

# run training
trainer.fine_tune('resources/taggers/prototypical_decoder')

Other Beta features

  • Dependency Parsing in Flair (#2486 #2579)
  • Lemmatization in Flair (#2531)
  • Initial implementation of JsonCorpora and Datasets (#2653)

Major Refactorings

With Flair expanding to many new NLP tasks (relation extraction, entity linking, etc.) and model types, we made a number of refactorings to reduce redundancy and make it easier to extend Flair.

Major refactoring of Label Logic in Flair (#2607 #2609 #2645)

The labeling logic was growing too complex to accommodate new tasks. With this release, we refactored this logic such that complex label classes like SpanLabel, RelationLabel etc. are removed in favor of a single Label class for all types of label. The Sentence object will now be automatically aware of all labels added to it.

To illustrate the difference, consider a before-and-after of how to add an entity label to a sentence.

Before:

# example sentence
sentence = Sentence("Humboldt Universität zu Berlin is located in Berlin .")

# create span for "Humboldt Universität zu Berlin"
span = Span(sentence[0:4])

# make a Span-label
span_label = SpanLabel(span=span, value='University')

# add Span-label to sentence
sentence.add_complex_label(typename='ner',  label=span_label)

Now:

# example sentence
sentence = Sentence("Humboldt Universität zu Berlin is located in Berlin .")

# directly add a label to the span "Humboldt Universität zu Berlin"
sentence[0:4].add_label("ner", "Organization")

So you can now just get a span from the sentence and add a label to it directly. It will get registered on the sentence as well.

Refactoring of printouts (#2704)

We changed and unified printouts across all Flair data points and labels, and updated the documentation to reflect this. Printouts should hopefully now be more concise. Let us know what you think.

Unified classes to reduce redundancy

Next to too many Label classes (see above), we also had too many corpora that essentially do the same thing, two partially overlapping transformer embedding classes and too much redundancy in our tokenization classes. This release makes many refactorings to make the code more maintainable:

  • Unify Corpora (#2607): Unifies several corpora into a single object. Before, we had ColumnCorpus, UniversalDependenciesCorpus, CoNNLuCorpus, and EntityLinkingCorpus, which resulted in too much redundancy. Now, there is only the ColumnCorpus for all such datasets.
  • Unify Transformer Embeddings (#2558, #2584, #2586): There was too much redundancy and inconsistency between the two Transformer-based embeddings classes TransformerWordEmbedding and TransformerDocumentEmbedding. Thanks to @helpmefindaname, they now both inherit from the same base object and share all features.
  • Unify Tokenizers (#2607): The Tokenizer classes no longer return lists of Token, but rather lists of strings that the Sentence object converts to tokens, centralizing the offset and whitespace_after detection in one place.

Simplifications to DefaultClassifier

The DefaultClassifier is the base class for nearly all models in Flair. With this release, we make a number of simplifications to reduce redundancy across classes and make it more modular.

  • forward_pass simplified to return 3 instead of 4 arguments
  • forward_pass returns embeddings instead of logits, allowing us to easily switch out the decoder (see the beta feature on prototypical networks above)
  • removed the unintuitive spawn logic we no longer need due to Label refactoring
  • unify dropouts across all classes (#2669)

Sequence tagger refactoring (#2361 #2550 #2561 #2564 #2585 #2565)

Major refactoring of SequenceTagger for better modularity and code readability.

Refactoring of Span Logic (#2607 #2609 #2645)

Spans are no longer stored as word-level 'bioes' tags, but rather directly stored as span-level annotations. The SequenceTagger will still internally use BIO/BIOES tags, but the corpora and sentences no longer explicitly store this information.

So you now choose the labeling format when instantiating the SequenceTagger, i.e.:

    tagger = SequenceTagger(
        hidden_size=256,
        embeddings=embeddings,
        tag_dictionary=tag_dictionary,
        tag_type="ner",
        tag_format="BIOES", # choose if you want to use BIOES or BIO internally
    )

Internally, this refactoring makes a number of changes and simplifications:

  • a number of fields have been added or moved up to the DataPoint class for convenience, including properties to get the start_position and end_position of data points, their text, their tag and score (if they have only one tag) and an unlabeled_identifier (see the sketch after this list)
  • moves up set_embedding() and to() from the data point classes (Sentence, Token, etc.) to their parent DataPoint
  • a number of methods like get_tag and add_tag have been removed from Token in favor of the get_label and add_label method of the parent DataPoint class
  • The ColumnCorpus will automatically identify which columns are span labels and treat them accordingly
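
A small sketch of the new convenience properties (property names as listed above):

from flair.data import Sentence

sentence = Sentence("Humboldt Universität zu Berlin is located in Berlin .")

# get a span and label it directly
span = sentence[0:4]
span.add_label("ner", "Organization")

print(span.text)                               # "Humboldt Universität zu Berlin"
print(span.start_position, span.end_position)  # position offsets of the span
print(span.unlabeled_identifier)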

Code Quality Checks (#2611)

They are back and stricter than ever! Thanks to @helpmefindaname, we now include mypy and formatting tests as part of our build process, which led to many changes in the code and a much greater chance of catching errors early.

Speed and Memory Improvements:

  • EntityLinker class refactored for speed (#2607)
  • Performance improvements in standard evaluate() method, especially for large datasets (#2607)
  • ColumnCorpus no longer does disk reads when in_memory=False, it simply stores the raw data in memory leading to significant speed-ups on large datasets (#2607)
  • Memory management improvements for embeddings (#2645)
  • Efficiency improvements for WordEmbeddings (#2491) and OneHotEmbeddings (#2490)

Bug Fixes and Improvements

  • Add equality method to Dictionary (#2532)
  • Fix encoding error in lemmatizer (#2539)
  • Fixed printing and logging inconsistencies. (#2665)
  • Readme (#2525 #2618 #2617 #2662)
  • Fix bug in WSD_UFSAC corpus (#2521)
  • change position of model saving in between epochs (#2548)
  • Fix loss weights in TextPairClassifier and RelationExtractor models (#2576)
  • Fix token positions on column corpus (#2440)
  • long sequence transformers of any kind (#2599)
  • The deprecated data_fetcher is finally removed (#2607)
  • Small lm training improvements (#2590)
  • Remove minor bug in NEL_ENGLISH_AIDA corpus (#2615)
  • Fix module import bug (#2616)
  • Fix reloading fast tokenizers (#2622)
  • Fix two small bugs (#2634)
  • Fix .pre-commit-config.yaml (#2651)
  • patch the missing document_delmiter for lm.get_state() (#2658)
  • DocumentPoolEmbeddings class can now be instantiated only with a single embedding (#2645)
  • You can now specify a min_count when computing the label dictionary. Labels below that count will be UNK'ed. (e.g. tag_dictionary = corpus.make_label_dictionary("ner", min_count=10)) (#2607)
  • The Dictionary will now compute count statistics for labels in a corpus (#2607)
  • The ColumnCorpus can now handle relation annotation, dependency tree information and UD feats and misc (#2607)
  • Embeddings are stored as a torch Embedding instead of a gensim KeyedVectors object. That way, version issues no longer arise if gensim does not ensure backwards compatibility
  • Make transformer offset calculation more robust (#2714)

v0.10

2 years ago

This release adds several new features such as in-built "model cards" for all Flair models, the first pre-trained models for Relation Extraction, better support for fine-tuning and a refactoring of the model training methods for more flexibility. It also fixes a number of critical bugs that were introduced by the refactorings in Flair 0.9.

Model Trainer Enhancements

Breaking change: We changed the ModelTrainer such that you now no longer pass the optimizer during initialization. Rather, it is now passed as a parameter of the train or fine_tune method.

Old syntax:

# 1. initialize trainer with AdamW optimizer
trainer = ModelTrainer(classifier, corpus, optimizer=torch.optim.AdamW)

# 2. run training with small learning rate and mini-batch size
trainer.train('resources/taggers/question-classification-with-transformer',
              learning_rate=5.0e-5,
              mini_batch_size=4,
             )

New syntax (optimizer is parameter of train method):

# 1. initialize trainer 
trainer = ModelTrainer(classifier, corpus)

# 2. run training with AdamW, small learning rate and mini-batch size
trainer.train('resources/taggers/question-classification-with-transformer',
              learning_rate=5.0e-5,
              mini_batch_size=4,
              optimizer=torch.optim.AdamW,
             )

Convenience function for fine-tuning (#2439)

Adds a fine_tune routine that sets default parameters used for fine-tuning (AdamW optimizer, small learning rate, few epochs, cyclic learning rate scheduling, etc.). Uses the new linear scheduler with warmup (#2415).

New syntax with fine_tune method:

from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# 1. get the corpus
corpus: Corpus = TREC_6()

# 2. what label do we want to predict?
label_type = 'question_class'

# 3. create the label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)

# 4. initialize transformer document embeddings (many models are available)
document_embeddings = TransformerDocumentEmbeddings('distilbert-base-uncased', fine_tune=True)

# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)

# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)

# 7. run training with fine-tuning
trainer.fine_tune('resources/taggers/question-classification-with-transformer',
                  learning_rate=5.0e-5,
                  mini_batch_size=4,
                  )

Model Cards (#2457)

When you train any Flair model, a "model card" will now automatically be saved that stores all training parameters and versions used to train this model. Later when you load a Flair model, you can print the model card and understand how the model was trained.

The following example trains a small POS tagger and prints the model card at the end:

from flair.datasets import UD_ENGLISH
from flair.embeddings import WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# initialize corpus and make label dictionary for POS tags
corpus = UD_ENGLISH().downsample(0.01)
tag_type = "pos"
tag_dictionary = corpus.make_label_dictionary(tag_type)

# simple sequence tagger
tagger = SequenceTagger(hidden_size=256,
                        embeddings=WordEmbeddings("glove"),
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# initialize model trainer and experiment path
trainer = ModelTrainer(tagger, corpus)
path = 'resources/taggers/model-card'

# train for a few epochs
trainer.train(path,
              max_epochs=20,
              )

# load best model and print "model card"
trained_model = SequenceTagger.load(path + '/best-model.pt')
trained_model.print_model_card()

This should print a model card like:

------------------------------------
--------- Flair Model Card ---------
------------------------------------
- this Flair model was trained with:
-- Flair version 0.9
-- PyTorch version 1.7.1
-- Transformers version 4.8.1
------------------------------------
------- Training Parameters: -------
------------------------------------
-- base_path = resources/taggers/model-card
-- learning_rate = 0.1
-- mini_batch_size = 32
-- mini_batch_chunk_size = None
-- max_epochs = 20
-- train_with_dev = False
-- train_with_test = False
[... shortened ...]
------------------------------------

Resume training any model (#2457)

Previously, we distinguished between checkpoints and model files. Now all models can function as checkpoints, meaning you can load them and continue training them. Say you want to load the model above (trained to epoch 20) and continue training it to epoch 25. Do it like this:

# resume training best model, but this time until epoch 25
trainer.resume(trained_model,
               base_path=path + '-resume',
               max_epochs=25,
               )

Pass optimizer and scheduler instance

You can also now pass an initialized optimizer and scheduler to the train and fine_tune methods.
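
For example, a minimal sketch reusing the classifier and corpus from the fine-tuning example above (path and values are placeholders):

import torch

from flair.trainers import ModelTrainer

trainer = ModelTrainer(classifier, corpus)

# pass an already-initialized optimizer instance instead of an optimizer class
optimizer = torch.optim.AdamW(classifier.parameters(), lr=5.0e-5)

trainer.fine_tune('resources/taggers/question-classification-with-transformer',
                  mini_batch_size=4,
                  optimizer=optimizer,
                  )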

Multi-Label Predictions and Confidence Threshold in TARS models (#2430)

Adding the possibility to set confidence thresholds on multi-label prediction in TARS, and setting whether a problem is single-label or multi-label:

from flair.models import TARSClassifier
from flair.data import Sentence

# 1. Load our pre-trained TARS model for English
tars: TARSClassifier = TARSClassifier.load('tars-base')

# switch to a multi-label task (emotion detection)
tars.switch_to_task('GO_EMOTIONS')

# sentence with two emotions
sentence = Sentence("I am happy and sad")

# predict normally
tars.predict(sentence)
print(sentence)

# predict with lower label threshold (you can set this to 0. to get all labels)
tars.predict(sentence, label_threshold=0.01)
print(sentence)

# predict and enforce a single-label prediction
tars.predict(sentence, label_threshold=0.01, multi_label=False)
print(sentence)

Relation Extraction ( #2471 #2492)

We refactored the RelationExtractor for more options, hopefully better code clarity and small speed improvements.

We also added two new relation extraction models, trained over a modified version of TACRED: relations and relations-fast. To use these models, you also need an entity tagger. The tagger identifies entities, then the relation extractor predicts relations between the identified entities.

For instance use this code:

from flair.data import Sentence
from flair.models import RelationExtractor, SequenceTagger

# 1. make example sentence
sentence = Sentence("George was born in Washington")

# 2. load entity tagger and predict entities
tagger = SequenceTagger.load('ner-fast')
tagger.predict(sentence)

# check which entities have been found in the sentence
entities = sentence.get_labels('ner')
for entity in entities:
    print(entity)

# 3. load relation extractor
extractor: RelationExtractor = RelationExtractor.load('relations-fast')

# predict relations
extractor.predict(sentence)

# check which relations have been found
relations = sentence.get_labels('relation')
for relation in relations:
    print(relation)

Embeddings

  • Refactoring of WordEmbeddings to avoid gensim version issues and enable further fine-tuning of pre-trained embeddings (#2491)
  • Refactoring of OneHotEmbeddings to fix errors caused by some corpora and enable "stable embeddings" (#2490 )

Other Enhancements and Bug Fixes

  • Compatibility with gensim 4 and Python 3.9 (#2496)
  • Fix TransformerWordEmbeddings if model_max_length not set in Tokenizer (#2502)
  • Fix TransformerWordEmbeddings handling of lang ids (#2417)
  • Fix attention mask for special Transformer architectures (#2485)
  • Fix regression model (#2424)
  • Fix problems caused by refactoring of Dictionary (#2429 #2435 #2453)
  • Fix infinite loop in Span::to_original_text (#2462)
  • Fix result object in ModelTrainer (#2519)
  • Fix bug in wsd_ufsac corpus (#2521)
  • Fix bugs in TARS and simple sequence tagger (#2468)
  • Add Amharic FLAIR EMBEDDING model (#2494)
  • Add MultiCoNer Dataset (#2507)
  • Add Korean Flair Tutorials (#2516 #2517)
  • Remove hyperparameter features (#2518)
  • Make it optional to create logfiles and loss files (#2421)
  • Small simplification of TransformerWordEmbeddings (#2425)

v0.9

2 years ago

With release 0.9 we are refactoring Flair for simplicity and speed, to make Flair faster and more easily scale to new NLP tasks. The first new tasks included in this release are Relation Extraction (RE), support for GLUE benchmark tasks and Entity Linking - all in beta for early adopters! We're working towards a Flair 1.0 release that will span the whole suite of standard NLP tasks. Also included is a new approach for Zero-Shot Sequence Labeling based on TARS! This release also includes a wealth of new datasets for all these tasks and tons of other new features and bug fixes.

Zero-Shot Sequence Labeling with TARS (#2260)

We extend the TARS zero-shot learning approach to sequence labeling and ship a pre-trained model for English NER. Try defining some classes and see if the model can find them:

from flair.data import Sentence
from flair.models import TARSTagger

# 1. Load zero-shot NER tagger
tars = TARSTagger.load('tars-ner')

# 2. Prepare some test sentences
sentences = [
    Sentence("The Humboldt University of Berlin is situated near the Spree in Berlin, Germany"),
    Sentence("Bayern Munich played against Real Madrid"),
    Sentence("I flew with an Airbus A380 to Peru to pick up my Porsche Cayenne"),
    Sentence("Game of Thrones is my favorite series"),
]

# 3. Define some classes of named entities such as "soccer teams", "TV shows" and "rivers"
labels = ["Soccer Team", "University", "Vehicle", "River", "City", "Country", "Person", "Movie", "TV Show"]
tars.add_and_switch_to_new_task('task 1', labels, label_type='ner')

# 4. Predict for these classes and print results
for sentence in sentences:
    tars.predict(sentence)
    print(sentence.to_tagged_string("ner"))

This should print:

The Humboldt <B-University> University <I-University> of <I-University> Berlin <E-University> is situated near the Spree <S-River> in Berlin <S-City> , Germany <S-Country>

Bayern <B-Soccer Team> Munich <E-Soccer Team> played against Real <B-Soccer Team> Madrid <E-Soccer Team>

I flew with an Airbus <B-Vehicle> A380 <E-Vehicle> to Peru <S-City> to pick up my Porsche <B-Vehicle> Cayenne <E-Vehicle>

Game <B-TV Show> of <I-TV Show> Thrones <E-TV Show> is my favorite series

So in these examples, we are finding entity classes such as "TV show" (Game of Thrones), "vehicle" (Airbus A380 and Porsche Cayenne), "soccer team" (Bayern Munich and Real Madrid) and "river" (Spree), even though the model was never explicitly trained for this. Note that this is ongoing research and the examples are a bit cherry-picked. We expect the zero-shot model to improve quite a bit until the next release.

New NLP Tasks and Datasets

We now prototypically support new tasks such as the GLUE benchmark, Relation Extraction and Entity Linking. With this, we ship the datasets and model classes you need to train your own models. But we are still tweaking both methods, meaning that we don't ship any pre-trained models as of yet.

GLUE Benchmark (#2149 #2363)

A standard benchmark to evaluate progress in language understanding, mostly consisting of single and pairwise sentence classification tasks.

New datasets in Flair:

  • 'GLUE_COLA' - The Corpus of Linguistic Acceptability from GLUE benchmark
  • 'GLUE_MNLI' - The Multi-Genre Natural Language Inference Corpus from the GLUE benchmark
  • 'GLUE_RTE' - The RTE task from the GLUE benchmark
  • 'GLUE_QNLI' - The Stanford Question Answering Dataset formatted as NLI task from the GLUE benchmark
  • 'GLUE_WNLI' - The Winograd Schema Challenge formatted as NLI task from the GLUE benchmark
  • 'GLUE_MRPC' - The MRPC task from GLUE benchmark
  • 'GLUE_QQP' - The Quora Question Pairs dataset where the task is to determine whether a pair of questions are semantically equivalent

Initialize datasets like so:

from flair.datasets import GLUE_QNLI

# load corpus
corpus = GLUE_QNLI()

# print corpus
print(corpus)

# print first sentence-pair of training data split
print(corpus.train[0])

# print all labels in corpus
print(corpus.make_label_dictionary("entailment"))

Relation Extraction (#2333 #2352)

Relation extraction classifies if and which relationship holds between two entities in a text.

Model class: RelationExtractor

Datasets in Flair:

Initialize datasets like so:

from flair.datasets import RE_ENGLISH_CONLL04

# initialize the CoNLL 04 corpus for relation extraction
corpus = RE_ENGLISH_CONLL04()
print(corpus)

# print first sentence of training split with annotations
sentence = corpus.train[0]
print(sentence)

# print label dictionary
label_dict = corpus.make_label_dictionary("relation")
print(label_dict)

Entity Linking (#2375)

Entity Linking goes one step further than NER and uniquely links entities to knowledge bases such as Wikipedia.

Model class: EntityLinker

Datasets in Flair:

from flair.datasets import NEL_ENGLISH_REDDIT

# load corpus
corpus = NEL_ENGLISH_REDDIT()

# print corpus
print(corpus)

# print a sentence of training data split
print(corpus.train[3])

New NER Datasets

Other datasets

  • 'YAHOO_ANSWERS' - The 10 largest main categories from Yahoo! Answers (#2198)
  • Various Universal Dependencies datasets (#2211, #2216, #2219, #2221, #2244, #2245, #2246, #2247, #2223, #2248, #2235, #2236, #2239, #2226)

New Functionality

Support for Arabic NER (#2188)

Flair now supports NER and POS tagging for Arabic. To tag an Arabic sentence, just load the appropriate model:


from flair.data import Sentence
from flair.models import SequenceTagger

# load model
tagger = SequenceTagger.load('ar-ner')

# make Arabic sentence
sentence = Sentence("احب برلين")

# predict NER tags
tagger.predict(sentence)

# print sentence with predicted tags
for entity in sentence.get_labels('ner'):
    print(entity)

This should print:

LOC [برلين (2)] (0.9803) 

More flexibility on main metric (#2161)

When training models, you can now choose any standard evaluation metric for model selection (previously it was fixed to micro F1). When calling the trainer, simply pass the desired metric as main_evaluation_metric like so:

trainer.train('resources/taggers/your_model',
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=10,
              main_evaluation_metric=("macro avg", 'f1-score'),
              )

In this example, we now use macro F1 instead of the default micro F1.

Add handling for mapping labels to 'O' #2254

In ColumnDataset, labels can be remapped to other labels. But sometimes you may not wish to use all label types in a given dataset. You can now remap them to 'O' and thus exclude them.

For instance, to load CoNLL-03 without MISC, do:

corpus = CONLL_03(
    label_name_map={'MISC': 'O'}
)
print(corpus.make_label_dictionary('ner'))
print(corpus.train[0].to_tagged_string('ner'))

Other

  • add per-label thresholds for prediction (#2366)

  • add support for Spanish clinical Flair embeddings (#2323)

  • added 'mean', 'max' pooling strategy for TransformerDocumentEmbeddings class (#2180)

  • new DocumentCNNEmbeddings class to embed text with a trainable CNN (#2141)

  • allow negative examples in ClassificationCorpus (#2233)

  • added new parameter to save model each k epochs during training (#2146)

  • log epoch of best model instead of printing it during training (#2286)

  • add option to exclude specific sentences from dataset (#2262)

  • improved tensorboard logging (#2164)

  • return predictions during evaluation (#2162)

Internal Refactorings

Refactor for simplicity and extensibility (#2333 #2351 #2356 #2377 #2379 #2382 #2184)

In order to accommodate all these new NLP task types (plus many more in the pipeline), we restructure the flair.nn.Model class such that most models now inherit from DefaultClassifier. This removes many redundancies as most models do classification and are really only different in what they classify and how they apply embeddings. Models that inherit from DefaultClassifier need only implement the method forward_pass, making each model class only a few lines of code.

Check for instance our implementation of the RelationExtractor class to see how easy it now is to add new tasks!
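
As a purely hypothetical skeleton (the exact forward_pass signature and return values may differ), a new model class looks roughly like this:

from flair.nn import DefaultClassifier

class MyNewClassifier(DefaultClassifier):
    """Hypothetical sketch: training and prediction logic is inherited from DefaultClassifier."""

    def forward_pass(self, sentences):
        # embed the data points, then return what DefaultClassifier needs
        # to compute the loss (scores/embeddings plus the gold labels)
        ...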

Refactor for speed

  • Flair models trained with transformers (such as the FLERT models) were previously not making use of mini-batching, greatly slowing down training and application of such models. We refactored the TransformerWordEmbeddings class, yielding significant speed-ups depending on the mini-batch size used. We observed speed-ups from x2 to x6. (#2385 #2389 #2384)

  • Improve training speed of Flair embeddings (#2203)

Bug fixes & improvements

  • fixed references to multi-X-fast Flair embedding models (#2150)
  • fixed serialization of DocumentRNNEmbeddings (#2155)
  • fixed separator in cross-attention mode (#2156)
  • fixed ID for Slovene word embeddings in the doc (#2166)
  • close log_handler after training is complete. (#2170)
  • fixed bug in IMDB dataset (#2172)
  • fixed IMDB data splitting logic (#2175)
  • fixed XLNet and Transformer-XL Execution (#2191)
  • remove unk token from Ner labeling (#2225)
  • fixed typo in property name (#2267)
  • fixed typos (#2303 #2373)
  • fixed parallel corpus (#2306)
  • fixed SegtokSentenceSplitter Incorrect Sentence Position Attributes (#2312)
  • fixed loading of old serialized models (#2322)
  • updated url for BioSemantics corpus (#2327)
  • updated requirements (#2346)
  • serialize multi_label_threshold for classification models (#2368)
  • small refactorings in ModelTrainer (#2184)
  • moving Path construction of flair.cache_root (#2241)
  • documentation improvement (#2304)
  • add model fit tests #2378

v0.8

3 years ago

Release 0.8 adds major new features to Flair, including our best named entity recognition (NER) models yet and the ability to host, share and test Flair models on the HuggingFace model hub! In addition, there is a host of improvements, new features and new datasets to check out!

FLERT (#2031 #2032 #2104)

This release adds the "FLERT" approach to train sequence tagging models using cross-sentence features as presented in our recent paper. This yields new state-of-the-art models which we include in Flair, as well as the features to easily train your own "FLERT" models.

Pre-trained FLERT models (#2130)

We add 5 new NER models for English (4-class and 18-class), German, Dutch and Spanish (4-class each). Load for instance with:

from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("ner-large")

# make example sentence
sentence = Sentence("George Washington went to Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)

If you want to test these models in action, for instance the new large English Ontonotes model with 18 classes, you can now use the hosted inference API on the HF model hub, like here.

Contextualized Sentences

In order to enable cross-sentence context, we made some changes to the Sentence object and data readers:

  1. Sentence objects now have next_sentence() and previous_sentence() methods that are set automatically if loaded through ColumnCorpus. This is a pointer system to navigate through sentences in a corpus:
# load corpus
corpus = MIT_MOVIE_NER_SIMPLE(in_memory=False)

# get a sentence
sentence = corpus.test[123]
print(sentence)
# get the previous sentence
print(sentence.previous_sentence())
# get the sentence after that
print(sentence.next_sentence())
# get the sentence after the next sentence
print(sentence.next_sentence().next_sentence())

This allows dynamic computation of contexts in the embedding classes.

  2. Sentence objects now have the is_document_boundary field, which is set through the ColumnCorpus. In some datasets, there are sentences like "-DOCSTART-" that just indicate document boundaries. This is now recorded as a boolean in the object.

Refactored TransformerWordEmbeddings (breaking)

TransformerWordEmbeddings refactored for dynamic context, robustness to long sentences and readability. The names of some constructor arguments have changed for clarity: pooling_operation is now subtoken_pooling (to make clear that we pool subtokens), use_scalar_mean is now layer_mean (we only do a simple layer mean) and use_context can now optionally take an integer to indicate the length of the context. Default arguments are also changed.

For instance, to create embeddings with a document-level context of 64 subtokens, init like this:

embeddings = TransformerWordEmbeddings(
    model='bert-base-uncased',
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=64,
)

Train your Own FLERT Models

You can train a FLERT-model like this:

import torch
from torch.optim.lr_scheduler import OneCycleLR

from flair.datasets import CONLL_03
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = CONLL_03()

use_context = 64
hf_model = 'xlm-roberta-large'

embeddings = TransformerWordEmbeddings(
    model=hf_model,
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=use_context,
)

tag_dictionary = corpus.make_tag_dictionary('ner')

# init bare-bones tagger (no reprojection, LSTM or CRF)
tagger: SequenceTagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type='ner',
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# train with XLM parameters (AdamW, 20 epochs, small LR)
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

trainer.train("resources/flert",
              learning_rate=5.0e-6,
              mini_batch_size=4,
              mini_batch_chunk_size=1,
              max_epochs=20,
              scheduler=OneCycleLR,
              embeddings_storage_mode='none',
              weight_decay=0.,
              )

We recommend training FLERT this way if accuracy is your top priority: because FLERT operates at the document level, it is quite slow.

HuggingFace model hub integration (#2040 #2108 #2115)

We now host Flair sequence tagging models on the HF model hub (thanks for all the support @HuggingFace!).

There is a dedicated 'Flair' tag on the hub, so you can browse an overview of all Flair models there.

The hub allows all users to upload and share their own models. Even better, you can enable the Inference API to test any model online without downloading and running it, for instance our new English 18-class NER model.

To load any sequence tagger on the model hub, use the string identifier when instantiating a model. For instance, to load our English Ontonotes model with the id "flair/ner-english-ontonotes-large", do:

from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)

Other New Features

New Task: Recognizing Textual Entailment (#2123)

Thanks to @marcelmmm we now support training textual entailment tasks (in fact, all pairwise sentence classification tasks) in Flair.

For instance, if you want to train an RTE task of the GLUE benchmark use this script:

import torch

from flair.data import Corpus
from flair.datasets import GLUE_RTE
from flair.embeddings import TransformerDocumentEmbeddings

# 1. get the entailment corpus
corpus: Corpus = GLUE_RTE()

# 2. make the label dictionary from the corpus
label_dictionary = corpus.make_label_dictionary()

# 3. initialize text pair tagger
from flair.models import TextPairClassifier

tagger = TextPairClassifier(
    document_embeddings=TransformerDocumentEmbeddings(),
    label_dictionary=label_dictionary,
)

# 4. initialize trainer with AdamW
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

# 5. run training
trainer.train('resources/taggers/glue-rte-english',
              learning_rate=2e-5,
              mini_batch_chunk_size=2,  # this can be removed if you have a big GPU
              train_with_dev=True,
              max_epochs=3)

Add possibility to specify empty label name to CSV corpora (#2068)

Some CSV classification datasets contain a value that means "no class". We now extend the CSVClassificationDataset so that it is possible to specify which value should be skipped using the no_class_label argument.

For instance:

# load corpus
corpus = CSVClassificationCorpus(
    data_folder='resources/tasks/code/',
    train_file='java_io.csv',
    skip_header=True,
    column_name_map={3: 'text', 4: 'label', 5: 'label', 6: 'label', 7: 'label', 8: 'label', 9: 'label'},
    no_class_label='NONE',
)

This causes all entries of NONE in one of the label columns to be skipped.

More options for splits in corpora and training (#2034)

For various reasons, we might want to have a Corpus that does not define all three splits (train/dev/test). For instance, we might want to train a model over the entire dataset and not hold out any data for validation/evaluation.

We add several ways of doing so.

  1. If a dataset has predefined splits, like most NLP datasets, you can pass the arguments train_with_test and train_with_dev to the ModelTrainer. This causes the trainer to train over all three splits (and do no evaluation):
trainer.train(f"path/to/your/folder",
    learning_rate=0.1,
    mini_batch_size=16,
    train_with_dev=True,
    train_with_test=True,
)
  2. You can now also create a Corpus with fewer splits without having all three splits automatically sampled. Pass sample_missing_splits=False as argument to do this. For instance, to load the SemCor WSD corpus only as training data, do:
semcor = WSD_UFSAC(train_file='semcor.xml', sample_missing_splits=False, autofind_splits=False)

Add TFIDF Embeddings (#2086)

We added some old-school embeddings (thanks @yosipk), namely the legendary TF-IDF document embeddings. These are often good baselines, and additionally they keep NLP veterans nostalgic, if not happy.

To initialize these embeddings, you must pass the train split of your training corpus, for instance:

embeddings = DocumentTFIDFEmbeddings(corpus.train, max_features=10000)

This fits the TF-IDF model on the training split, using the most common words to featurize documents.
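
Once initialized, the embeddings can be used like any other document embedding. A minimal usage sketch (the example sentence is made up):

from flair.data import Sentence

# embed an example document with the fitted TF-IDF model
sentence = Sentence("TF-IDF is a strong baseline for document classification.")
embeddings.embed(sentence)
print(sentence.embedding.shape)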

New Datasets

Hungarian NER Corpus (#2045)

Added the Hungarian business news corpus annotated with NER information (thanks to @alibektas).

# load Hungarian business NER corpus
corpus = BUSINESS_HUN()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

StackOverflow NER Corpus (#2052)

# load StackOverflow NER corpus
corpus = STACKOVERFLOW_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

Added GermEval 18 Offensive Language dataset (#2102)

# load GermEval 2018 Offensive Language corpus
corpus = GERMEVAL_2018_OFFENSIVE_LANGUAGE()
print(corpus)
print(corpus.make_label_dictionary())

Added RTE corpora of GLUE and SuperGLUE

# load the recognizing textual entailment corpus of the GLUE benchmark
corpus = GLUE_RTE()
print(corpus)
print(corpus.make_label_dictionary())

Improvements

Allow newlines as Tokens in a Sentence (#2070)

Newlines and tabs can now become Tokens in a Sentence:

# make sentence with newlines and tabs
sentence: Sentence = Sentence(["I", "\t", "ich", "\n", "you", "\t", "du", "\n"], use_tokenizer=True)

# Alternatively: sentence: Sentence = Sentence("I \t ich \n you \t du \n", use_tokenizer=False)

# print sentence and each token
print(sentence)
for token in sentence:
    print(token)

Improve transformer serialization (#2046)

We improved the serialization of the TransformerWordEmbeddings class such that you can now train a model with one version of the transformers library and load it with another version. Previously, if you trained a model with transformers 3.5.1 and loaded it with 3.1.0, or trained with 3.5.1 and loaded with 4.1.1, or had any other version mismatch, there would either be errors or bad predictions.

Migration guide: If you have a model trained with an older version of Flair that uses TransformerWordEmbeddings, you can save it in the new version-independent format by loading the model with the same transformers version you used to train it, and then saving it again. The newly saved model is then version-independent:

# load old model, but use the *same transformer version you used when training this model*
tagger = SequenceTagger.load('path/to/old-model.pt')

# save the model. It is now version-independent and can for instance be loaded with transformers 4.
tagger.save('path/to/new-model.pt')

Fix regression prediction errors (#2067)

This fixes two problems in the regression model:

  • the predict() method was unable to set labels and threw errors (see #2056)
  • predicted labels had no label name

Now, you can set a label name either in the predict method or during instantiation of the regression model you want to train. So the full code for training a regression model and using it to predict is:

from flair.data import Sentence
from flair.datasets import WASSA_JOY
from flair.embeddings import DocumentPoolEmbeddings, WordEmbeddings
from flair.models import TextRegressor
from flair.trainers import ModelTrainer

# load regression dataset
corpus = WASSA_JOY()

# make simple document embeddings
embeddings = DocumentPoolEmbeddings([WordEmbeddings('glove')], fine_tune_mode='linear')

# init model and give name to label
model = TextRegressor(embeddings, label_name='happiness')

# target folder
output_folder = 'resources/taggers/regression_test/'

# run training
trainer = ModelTrainer(model, corpus)
trainer.train(
    output_folder,
    mini_batch_size=16,
    max_epochs=10,
)

# load model
model = TextRegressor.load(output_folder + 'best-model.pt')

# predict for sentence
sentence = Sentence('I am so happy')
model.predict(sentence)

# print sentence and prediction
print(sentence)

An example run prints the following sentence and predicted value:

Sentence: "I am so happy"   [− Tokens: 4  − Sentence-Labels: {'happiness': [0.9239126443862915 (1.0)]}]

Do not shuffle first epoch during training (#2058)

Normally, we shuffle sentences at each epoch during training in the ModelTrainer class. However, in some cases it makes sense to see sentences in their natural order during the first epoch, and shuffle only from the second epoch onward.
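
A minimal sketch of what this looks like in a training call, assuming your Flair version exposes the shuffle_first_epoch flag (newer versions of the ModelTrainer do):

trainer.train('resources/taggers/example',
              max_epochs=10,
              shuffle=True,               # shuffle from the second epoch onward
              shuffle_first_epoch=False,  # assumed flag; keeps natural order in epoch 1
              )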

Bug Fixes and Improvements

  • Update to transformers 4 (#2057)
  • Fix the evaluate() method in the SimilarityLearner class (#2113)
  • Fix memory leak in WordEmbeddings (#2018)
  • Add support for Transformer-XL Embeddings (#2009)
  • Restrict numpy version to <1.20 for Python 3.6 (#2014)
  • Small formatting and variable declaration changes (#2022)
  • Fix document boundary offsets for Dutch CoNLL-03 (#2061)
  • Changed the torch version in requirements.txt to torch>=1.5.0 (#2063)
  • Fix linear input dimension when reprojecting embeddings (#2073)
  • Various improvements for TARS (#2090 #2128)
  • Added a link to the interpret-flair repo (#2096)
  • Improve documentation (#2110)
  • Update sentencepiece and gdown version (#2131)
  • Add to_plain_string method to Span class (#2091)

v0.7

3 years ago

Release 0.7 adds major few-shot and zero-shot learning capabilities to Flair with our new TARS approach, plus support for the Universal Proposition Banks, new NER datasets and lots of other new features!

Few-Shot and Zero-Shot Classification with TARS (#1917 #1926)

With TARS we add a major new feature to Flair for zero-shot and few-shot classification. Details on the approach can be found in our paper Halder et al. (2020). Our approach allows you to classify text in cases in which you have little or even no training data at all.

This example illustrates how you predict new classes without training data:

from flair.data import Sentence
from flair.models import TARSClassifier

# 1. Load our pre-trained TARS model for English
tars = TARSClassifier.load('tars-base')

# 2. Prepare a test sentence
sentence = Sentence("I am so glad you liked it!")

# 3. Define some classes that you want to predict using descriptive names
classes = ["happy", "sad"]

# 4. Predict for these classes
tars.predict_zero_shot(sentence, classes)

# Print sentence with predicted labels
print(sentence)

For a full overview of TARS features, please refer to our new TARS tutorial.

Other New Features

Option to set Flair seed (#1979)

Adds the possibility to set a seed by wrapping the Hugging Face Transformers library helper method (thanks @stefan-it).

By specifying a seed with:

import flair

flair.set_seed(42)

you can make experimental runs reproducible. The wrapped set_seed method sets seeds for random, numpy and torch.

Control multi-word behavior in UD datasets (#1981)

To better handle multiwords in UD corpora, we introduce the split_multiwords constructor argument to all UD corpora, which by default is set to True. It controls the handling of multiwords that are split into several tokens. For instance, the German "am" is split into two different tokens: "am" -> "an" + "dem". Similarly, the French "aux" -> "à" + "les".

If split_multiwords is set to True, they are split as in UD. If set to False, we keep the original multiword as a single token. Example:

# default mode: multiwords are split
corpus = UD_GERMAN(split_multiwords=True)
# print sentence 179
print(corpus.dev[179].to_plain_string())

# alternative mode: multiwords are kept as original
corpus = UD_GERMAN(split_multiwords=False)
# print sentence 179
print(corpus.dev[179].to_plain_string())  

This prints

Ein Hotel zu dem Wohlfühlen.

Ein Hotel zum Wohlfühlen.

The latter is how the sentence appears in the original text; the former is the result after splitting multiwords.

Pass pretokenized sentence to Sentence object (#1965)

You can now pass a pretokenized sequence as a list of words (thanks @ulf1):

from flair.data import Sentence
sentence = Sentence(['The', 'grass', 'is', 'green', '.'])
print(sentence)

This should print:

Sentence: "The grass is green ."   [− Tokens: 5]

Map label names in sequence labeling datasets (#1988)

You can now pass a label map to sequence labeling datasets to change label names (thanks @pharnisch).

# print tag dictionary with mapped names
corpus = CONLL_03_DUTCH(label_name_map={'PER': 'person', 'ORG': 'organization', 'LOC': 'location', 'MISC': 'other'})
print(corpus.make_tag_dictionary('ner'))

# print tag dictionary with original names
corpus = CONLL_03_DUTCH()
print(corpus.make_tag_dictionary('ner'))

Data Sets

Universal Proposition Banks (#1870 #1866 #1888)

Flair 0.7 adds support for 7 Universal Proposition Banks so you can train your own multilingual semantic role labelers (thanks to @Dabendorf).

Load for instance with:

# load English Universal Proposition Bank
corpus = UP_ENGLISH()
print(corpus)

# make dictionary of frames
frame_dictionary = corpus.make_tag_dictionary('frame')
print(frame_dictionary)

Universal Proposition Banks are now available for Finnish, Chinese, Italian, French, German, Spanish and English.
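
To go one step further and actually train a semantic role labeler on such a corpus, a minimal sketch could look like this (a hypothetical baseline setup with classic GloVe embeddings and default parameters, reusing the corpus loaded above):

from flair.embeddings import WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# make dictionary of frame tags (as above)
frame_dictionary = corpus.make_tag_dictionary('frame')

# a simple frame tagger over classic word embeddings
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=WordEmbeddings('glove'),
    tag_dictionary=frame_dictionary,
    tag_type='frame',
)

# train for a few epochs
trainer = ModelTrainer(tagger, corpus)
trainer.train('resources/taggers/frame-english', max_epochs=10)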

NER Corpora

We add support for 6 new NER corpora:

Arabic NER Corpus (#1901)

Added the ANER corpus for Arabic NER (thanks to @megantosh).

# load Arabic NER corpus
corpus = ANER_CORP()
print(corpus)

Movie NER Corpora (#1912)

Added the MIT movie reviews corpora annotated with NER information, in simple and complex variants (thanks to @pharnisch).

# load simple movie NER corpus
corpus = MITMovieNERSimple()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

# load complex movie NER corpus
corpus = MITMovieNERComplex()
print(corpus)
print(corpus.make_tag_dictionary('ner'))   

Added SEC Filings NER corpus (#1922)

Added a corpus of SEC filings annotated with 4-class NER tags (thanks to @samahakk).

# load SEC filings corpus
corpus = SEC_FILLINGS()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

WNUT 2020 NER dataset support (#1942)

Added a corpus of wet lab protocols annotated with NER information, as used for the WNUT 2020 shared task (thanks to @aynetdia).

# load wet lab protocol data
corpus = WNUT_2020_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

Weibo NER dataset support (#1944)

Added a dataset for NER on Chinese social media (thanks to @87302380).

# load Weibo NER data
corpus = WEIBO_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

Added Finnish NER corpus (#1946)

Added the TURKU corpus for Finnish NER (thanks to @melvelet).

# load Finnish NER data
corpus = TURKU_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

Universal Dependency Treebanks

We add support for 11 new UD treebanks:

  • Greek UD Treebank (#1933, thanks @malamasn)
  • Livvi UD Treebank (#1953, thanks @hebecked)
  • Naija UD Treebank (#1952, thanks @teddim420)
  • Buryat UD Treebank (#1954, thanks @MaxDall)
  • North Sami UD Treebank (#1955, thanks @dobbersc)
  • Maltese UD Treebank (#1957, thanks @phkuep)
  • Marathi UD Treebank (#1958, thanks @polarlyset)
  • Afrikaans UD Treebank (#1959, thanks @QueStat)
  • Gothic UD Treebank (#1961, thanks @wjSimon)
  • Old French UD Treebank (#1964, thanks @Weyaaron)
  • Wolof UD Treebank (#1967, thanks @LukasOpp)

Load each by language name, for instance:

# load Gothic UD treebank data
corpus = UD_GOTHIC()
print(corpus)
print(corpus.test[0])

Added GoEmotions text classification corpus (#1914)

Added GoEmotions dataset containing 58k Reddit comments labeled with 27 emotion categories. Load with:

# load GoEmotions corpus
corpus = GO_EMOTIONS()
print(corpus)
print(corpus.make_label_dictionary())

Enhancements and bug fixes

  • Add handling for micro-average precision and recall (#1935)
  • Make dev and test splits in treebanks optional (#1951)
  • Updated communicative functions model (#1857)
  • Biomedical Data: Explicit encodings for Windows Support (#1893)
  • Fix wrong abstract method (#1923 #1940)
  • Improve tutorial (#1939)
  • Fix requirements (#1971)