🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy
New features and improvements:

- Use `ModelOutput` instead of tuples in `TransformerData.model_output` and `FullTransformerBatch.model_output`. For backwards compatibility, the tuple format remains available under `TransformerData.tensors` and `FullTransformerBatch.tensors`. See more details in the transformer API docs.
- Add support for `transformer_config` settings such as `output_attentions`. Additional output is stored under `TransformerData.model_output`. More details in the `TransformerModel` docs.
- Extend support for `transformers` up to v4.11.x.
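As a sketch of how extra Hugging Face settings such as `output_attentions` can be passed through via `transformer_config` in the training config (assuming the `TransformerModel.v3` architecture and `roberta-base` as the model name; adjust both for your pipeline):

```ini
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"

[components.transformer.model.transformer_config]
output_attentions = true
```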
Backwards incompatibilities:

- The serialization format for `transformer` components has changed in v1.1 and is not compatible with spacy-transformers v1.0.x. Pipelines trained with v1.0.x can be loaded with v1.1.x, but pipelines saved with v1.1.x cannot be loaded with v1.0.x.
- `TransformerData.tensors` and `FullTransformerBatch.tensors` return a tuple instead of a list.

Contributors: @adrianeboyd, @bryant1410, @danieldk, @honnibal, @ines, @KennethEnevoldsen, @svlandeg
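The move from plain tuples to `ModelOutput` can be illustrated with a toy stand-in (this is not the real `transformers` implementation, just a sketch of the access pattern): outputs become reachable by name, while a tuple view remains available for code that expects the legacy format.

```python
from collections import OrderedDict


class MiniModelOutput(OrderedDict):
    """Toy stand-in for transformers' ModelOutput, for illustration only:
    outputs are reachable by name, and a plain tuple is still available
    for consumers that expect the old format."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as err:
            raise AttributeError(name) from err

    def to_tuple(self):
        # Legacy view: the same values as an ordered tuple.
        return tuple(self.values())


# Hypothetical values standing in for real output tensors.
output = MiniModelOutput(last_hidden_state=[[0.1, 0.2]], pooler_output=[0.3])
by_name = output.last_hidden_state  # named access, self-documenting
legacy = output.to_tuple()          # tuple view for old consumers
```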
- Fix `grad_factor` when replacing listeners.
- Extend support for `transformers` to <4.10.0.
- Set the `trf_data` extension in `Transformer.__call__` and `Transformer.pipe` to support distributed processing.

Thanks to @bryant1410 for the pull requests and contributions!
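The reason the extension must be set in both entry points is that batched and distributed processing goes through `pipe`, which does not necessarily call `__call__` per document. A pure-Python sketch of the pattern (`ToyDoc` and `ToyTransformer` are made-up names, not the spaCy API):

```python
class ToyDoc:
    """Minimal stand-in for a spaCy Doc with one custom attribute slot."""

    def __init__(self, text):
        self.text = text
        self.trf_data = None  # stands in for the real doc._.trf_data


class ToyTransformer:
    """Toy component: the annotation is set in both __call__ and pipe,
    since batch processing uses pipe rather than per-doc __call__."""

    def _annotate(self, doc):
        doc.trf_data = {"tokens": doc.text.split()}  # fake transformer output
        return doc

    def __call__(self, doc):
        return self._annotate(doc)

    def pipe(self, docs):
        for doc in docs:
            yield self._annotate(doc)


component = ToyTransformer()
single = component(ToyDoc("hello world"))              # set via __call__
batched = list(component.pipe([ToyDoc("a b"), ToyDoc("c d")]))  # set via pipe
```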
This release requires spaCy v3.

- `Transformer` component for easy pipeline integration.
- `TransformerListener` to share transformer weights between components.
- Transformer-based pipelines work out of the box when spacy-transformers is installed in the same environment.

Documentation:

- `Transformer`: Pipeline component API reference
- `TransformerListener.v1`: Architecture reference

🌙 This release is a pre-release and requires spaCy v3 (nightly).
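A minimal config sketch of the listener pattern, where downstream components reuse the shared transformer's output instead of running their own embedding layer (the `transformer` and `ner` factory names and the `spacy-transformers.TransformerListener.v1` and `reduce_mean.v1` registered names come from the spaCy docs; the `grad_factor` value is illustrative):

```ini
[components.transformer]
factory = "transformer"

[components.ner]
factory = "ner"

[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0

[components.ner.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
```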