Thinc Versions

🔮 A refreshing functional take on deep learning, compatible with your favorite libraries

v9.0.0

3 weeks ago

The main new feature of Thinc v9 is support for learning rate schedules that can take the training dynamics into account. For example, the new plateau.v1 schedule scales the learning rate when no progress has been made after a given number of evaluation steps. Another visible change is that AppleOps is now part of Thinc, so it is no longer necessary to install thinc-apple-ops to use the AMX units on Apple Silicon.

✨ New features and improvements

  • Learning rate schedules can now take the training step as well as an arbitrary set of keyword arguments. This makes it possible to pass information such as the parameter name and the last evaluation score to determine the learning rate (#804).
  • Added the plateau.v1 schedule (#842). This schedule scales the learning rate if training is found to be stagnant for a given period.
  • The functionality of thinc-apple-ops has been integrated into Thinc (#927). Starting with this version, it is no longer necessary to install thinc-apple-ops.
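The plateau behavior can be sketched in plain Python. This is an illustrative re-implementation of the idea, not Thinc's actual code; the function name, argument names, and the last_score keyword are assumptions made for the sketch:

```python
def make_plateau(base_lr, scale, patience):
    # Track the best score seen so far and how many evaluations in a
    # row have failed to improve on it.
    state = {"best": float("-inf"), "bad": 0, "factor": 1.0}

    def schedule(step, last_score=None):
        if last_score is not None:
            if last_score > state["best"]:
                state["best"] = last_score
                state["bad"] = 0
            else:
                state["bad"] += 1
                if state["bad"] >= patience:
                    # Training looks stagnant: scale the learning rate.
                    state["factor"] *= scale
                    state["bad"] = 0
        return base_lr * state["factor"]

    return schedule
```

Passing the evaluation score to the schedule as a keyword argument is exactly the kind of use the extended schedule signature (#804) enables.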

🔴 Bug fixes

  • Fix the use of thread-local storage (#917).

⚠️ Backwards incompatibilities

  • Thinc v9.0.0 only supports Python 3.9 and later.
  • Schedules are no longer generators, but implementations of the Schedule class (#804).
  • thinc.backends.linalg has been removed (#742). The same functionality is provided by BLAS implementations that are better tested and more performant.
  • thinc.extra.search has been removed (#743). The beam search functionality in this module was strongly coupled to the spaCy transition parser and has therefore been moved to spaCy in v4.
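The shape of the schedule change can be sketched with toy stand-ins (hypothetical classes, not Thinc's actual implementations): v8 schedules were generators advanced with next(), while v9 schedules are callables that receive the step and optional keyword arguments:

```python
def constant_v8(lr=0.001):
    # v8 style: an infinite generator yielding the next learning rate.
    while True:
        yield lr

class ConstantV9:
    # v9 style: a Schedule-like callable that is queried by step and
    # can receive extra keyword arguments (e.g. the last score).
    def __init__(self, lr=0.001):
        self.lr = lr

    def __call__(self, step, **extra):
        return self.lr
```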

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @shadeMe, @svlandeg

v8.2.3

3 months ago

🔴 Bug fixes

  • Make strings2arrays work again for sequences of unequal length (#918).
  • Fix cupy.cublas import (#921).

👥 Contributors

@danieldk, @honnibal, @ines, @svlandeg

v8.2.2

4 months ago

✨ New features and improvements

Add the ParametricAttention_v2 layer, which adds support for key transformations (#913).
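The idea behind the layer can be sketched with NumPy (an illustrative formulation, not Thinc's implementation; the key_transform argument stands in for the key transformation the v2 layer adds):

```python
import numpy as np

def parametric_attention(X, w, key_transform=None):
    # Optionally transform the keys before scoring (the v2 addition).
    keys = key_transform(X) if key_transform is not None else X
    scores = keys @ w                  # one score per row of X
    scores = scores - scores.max()     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum()
    # Weight each input row by its attention probability.
    return X * attn[:, None]
```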

👥 Contributors

@danieldk, @honnibal, @ines, @svlandeg

v8.2.1

7 months ago

✨ New features and improvements

Updates and binary wheels for Python 3.12.

👥 Contributors

@adrianeboyd, @honnibal, @ines, @svlandeg

v8.2.0

9 months ago

✨ New features and improvements

To improve loading times and reduce conflicts, MXNet and TensorFlow are no longer imported automatically (#890).

⚠️ Backwards incompatibilities

MXNet and TensorFlow support needs to be enabled explicitly. Previously, MXNet and TensorFlow were imported automatically if they were available in the current environment.

To enable MXNet:

from thinc.api import enable_mxnet
enable_mxnet()

To enable TensorFlow:

from thinc.api import enable_tensorflow
enable_tensorflow()

With spaCy CLI commands, you can provide this custom code using -c code.py. For training, use spacy train -c code.py; to package your code with your pipeline, use spacy package -c code.py.

Future deprecation warning: built-in MXNet and TensorFlow support will be removed in Thinc v9. If you need MXNet or TensorFlow support in the future, you can transition to using a custom copy of the current MXNetWrapper or TensorFlowWrapper in your package or project.

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @svlandeg

v8.1.12

9 months ago

🔴 Bug fixes

  • Support zero-length batches and hidden sizes in reduce_{max,mean,sum} (#882).
  • Preserve values with dtype for NumpyOps/CupyOps.asarray (#897).
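The zero-length case can be illustrated with a plain NumPy version of ragged mean pooling (a sketch of the idea, not Thinc's kernel): a zero-length sequence should reduce to a zero vector rather than trigger a division by zero:

```python
import numpy as np

def reduce_mean(X, lengths):
    # X stacks the rows of all sequences in the batch; lengths gives
    # the number of rows belonging to each sequence.
    out = np.zeros((len(lengths), X.shape[1]), dtype=X.dtype)
    start = 0
    for i, n in enumerate(lengths):
        if n > 0:
            out[i] = X[start:start + n].mean(axis=0)
        # A zero-length sequence keeps its zero vector.
        start += n
    return out
```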

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @svlandeg

v8.1.11

9 months ago

✨ New features and improvements

  • Update NumPy build constraints for NumPy v1.25 (#885).
  • Switch from distutils to setuptools/sysconfig (#888).
  • Allow Pydantic v2 using transitional v1 support (#891).

📖 Documentation and examples

  • Fix typo in example code (#879).

👥 Contributors

@adrianeboyd, @Ankush-Chander, @danieldk, @honnibal, @ines, @svlandeg

v8.1.10

1 year ago

✨ New features and improvements

  • Implement pad as a CUDA kernel (#860).
  • Avoid an h2d-d2h round trip when using unflatten (#861).
  • Improve exception when CuPy/PyTorch MPS is not installed (#863).
  • Lazily load custom cupy kernels (#870).

🔴 Bug fixes

  • Initially load TorchScript models on CPU for MPS devices (#864).

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @shadeMe, @svlandeg

v8.1.9

1 year ago

🔴 Bug fixes

  • Fix type signature of Model.begin_update (#858).

👥 Contributors

@danieldk, @honnibal, @ines

v8.1.8

1 year ago

✨ New features and improvements

  • Add premap_ids.v1 layer for mapping from ints to ints (#815).
  • Update to mypy 1.0.x (#848).
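The int-to-int mapping idea can be sketched as a table lookup with a fallback (an illustrative version, not the layer's actual implementation; the default argument is an assumption made for the sketch):

```python
def premap_ids(mapping, default=0):
    # Replace each id via a lookup table, falling back to a default
    # value for ids that are not in the table.
    def remap(ids):
        return [mapping.get(i, default) for i in ids]
    return remap
```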

🔴 Bug fixes

  • Make the resizable layer work with textcat and transformers (#820).

📖 Documentation

  • Update website including Dockerfile (#843, #844, #845).

👥 Contributors

@adrianeboyd, @danieldk, @essenmitsosse, @honnibal, @ines, @kadarakos, @patjouk, @polm, @svlandeg