opt_einsum Versions

⚡️ Optimizing einsum functions in NumPy, TensorFlow, Dask, and more with contraction order optimization.

v2.1.3

5 years ago

Bug fixes:

  • Fixes unicode issue for large numbers of tensors in Python 2.7.
  • Fixes unicode install bug in README.md.

v2.1.2

5 years ago

Bug Fixes:

  • Ensures versioneer.py is in MANIFEST.in for a clean pip install.

v2.1.1

5 years ago

Bug Fixes:

  • Minor tweak to release procedure.

v2.1.0

5 years ago

opt_einsum continues to improve its support for backends beyond NumPy, now adding PyTorch.

We have also published the opt_einsum package in the Journal of Open Source Software. If you use this package in your work, please consider citing us!

New features:

  • PyTorch backend support
  • TensorFlow eager-mode execution backend support
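
As a minimal sketch of backend selection (the array shapes here are hypothetical), opt_einsum's contract function accepts a backend argument; 'numpy' always works, while 'torch' or 'tensorflow' can be used when those packages are installed and the operands are tensors from that library:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(4, 5)
b = np.random.rand(5, 6)

# backend selects the execution library; with NumPy arrays the
# 'numpy' backend computes the contraction directly.
c = oe.contract('ij,jk->ik', a, b, backend='numpy')
print(c.shape)
```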

Enhancements:

  • Intermediate tensordot-like expressions are now ordered to avoid transposes.
  • CI now uses a conda backend to better support GPU and tensor libraries.
  • Now accepts arbitrary unicode indices rather than a subset.
  • New auto path option which switches between optimal and greedy at four tensors.
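
A minimal sketch of the auto path option (the expression and shapes here are illustrative): passing optimize='auto' lets opt_einsum choose the exhaustive optimal search for small networks and the cheaper greedy heuristic for larger ones:

```python
import numpy as np
import opt_einsum as oe

# A chain of four matrix products; small enough for the optimal search.
x = np.random.rand(8, 8)

# contract_path returns the pairwise contraction order and a cost summary
# without performing the contraction itself.
path, info = oe.contract_path('ab,bc,cd,de->ae', x, x, x, x, optimize='auto')
print(info)  # summary of the chosen contraction order and its cost
```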

Bug fixes:

  • Fixed issue where broadcast indices were incorrectly locked out of tensordot-like evaluations even after their dimension was broadcast.

v2.0.1

5 years ago

opt_einsum is a powerful tensor contraction order optimizer for NumPy and related ecosystems.

New Features

  • Allows unlimited Unicode indices.
  • Adds a Journal of Open Source Software paper.
  • Minor documentation improvements.

v2.0.0

6 years ago

opt_einsum is a powerful tensor contraction order optimizer for NumPy and related ecosystems.

New Features

  • Expressions can be precompiled so that the expression optimization need not happen multiple times.
  • The greedy order optimization algorithm has been tuned to handle hundreds of tensors in several seconds.
  • Input indices can now be unicode so that expressions can have many thousands of indices.
  • GPU and distributed computing backends such as Dask, TensorFlow, CuPy, Theano, and Sparse have been added.
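
Precompilation is exposed through opt_einsum's contract_expression function, which plans the contraction once from shapes alone; a minimal sketch with hypothetical shapes:

```python
import numpy as np
import opt_einsum as oe

# Plan the contraction from shapes only; the returned callable can then
# be reused on many concrete arrays without re-optimizing the path.
expr = oe.contract_expression('ij,jk,kl->il', (10, 4), (4, 7), (7, 3))

a = np.random.rand(10, 4)
b = np.random.rand(4, 7)
c = np.random.rand(7, 3)
out = expr(a, b, c)
print(out.shape)
```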

Bug Fixes

  • An error affecting cases where opt_einsum mistook broadcasting operations for matrix multiplication has been fixed.
  • Most error messages are now more expressive.

1.0

7 years ago

Official 1.0 release.

Einsum is a very powerful function for contracting tensors of arbitrary dimension and index. However, it contracts only two terms at a time, resulting in suboptimal scaling for contractions with many terms. opt_einsum aims to fix this by optimizing the contraction order, which can lead to arbitrarily large speedups at the cost of additional intermediate tensors.

opt_einsum is also integrated into the np.einsum function as of NumPy v1.12.
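
In NumPy 1.12 and later, this integration is reachable through np.einsum's optimize keyword; a minimal sketch with illustrative shapes:

```python
import numpy as np

a = np.random.rand(5, 6)
b = np.random.rand(6, 7)
c = np.random.rand(7, 8)

# Default einsum contracts all terms at once; optimize=True applies
# contraction-order optimization, returning the same result faster
# for larger expressions.
naive = np.einsum('ij,jk,kl->il', a, b, c)
fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
print(np.allclose(naive, fast))  # True
```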

v0.2.0

7 years ago

A large step toward a full 1.0 release. BLAS usage is now automatically applied to all operations. Future releases will be more careful with regard to views and needless data copying.

v0.1.1

8 years ago

Adds Python 3 support and installation through a setup.py command.