Hyper optimized contraction trees for large tensor networks and einsums
## v0.6.0

**Bug fixes**

- `tree.peak_size` is now more accurate, taking the maximum size while assuming the left, right and parent intermediates are all present at the same time.
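To illustrate the accounting change, here is a rough, self-contained sketch (not cotengra's actual code; the function name and the fixed dimension of 2 are illustrative) of a peak-size estimate that counts the left child, right child and parent intermediate as simultaneously allocated:

```python
def peak_size(inputs, output, path):
    """Estimate peak memory of a pairwise contraction `path`,
    assuming each step's left, right and parent tensors coexist.
    All indices are assumed to have dimension 2 for simplicity.
    """
    terms = [set(ix) for ix in inputs]
    peak = sum(2 ** len(t) for t in terms)  # all inputs alive initially
    for i, j in path:
        i, j = max(i, j), min(i, j)
        a = terms.pop(i)  # left child
        b = terms.pop(j)  # right child
        # indices still needed by the output or remaining tensors
        keep = set(output).union(*terms)
        new = (a | b) & keep  # parent intermediate
        # left + right + parent together, plus all other live tensors
        current = (2 ** len(a) + 2 ** len(b) + 2 ** len(new)
                   + sum(2 ** len(t) for t in terms))
        peak = max(peak, current)
        terms.append(new)
    return peak
```

For the chain `ab,bc,cd->ad` contracted as `[(0, 1), (0, 1)]` this reports a peak of 16 entries, reached when both children and the first intermediate coexist alongside the untouched third tensor.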
**Enhancements**

- Add simulated annealing tree refinement (`path_simulated_annealing.py`), based on "Multi-Tensor Contraction for XEB Verification of Quantum Circuits" by Gleb Kalachev, Pavel Panteleev and Man-Hong Yung (arXiv:2108.05665), and the "treesa" implementation in OMEinsumContractionOrders.jl by Jin-Guo Liu and Pan Zhang. This can be accessed most easily by supplying `opt = HyperOptimizer(simulated_annealing_opts={})`.
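The flavour of such annealing refinement can be sketched on a toy problem. This is a generic simulated-annealing loop over matrix-chain merge orders, not the treesa algorithm or cotengra's implementation, and all names, the cooling schedule and the move set are illustrative:

```python
import math
import random

def chain_cost(dims, order):
    """Scalar-multiplication cost of multiplying a matrix chain with
    boundary dimensions `dims`, merging adjacent pairs in `order`."""
    d = list(dims)
    total = 0
    for pos in order:
        pos = min(pos, len(d) - 3)  # clamp as the chain shrinks
        total += d[pos] * d[pos + 1] * d[pos + 2]
        del d[pos + 1]
    return total

def anneal_order(dims, steps=2000, t0=1000.0, seed=0):
    """Anneal the merge order: propose swaps, always accept
    improvements, accept worsenings with Boltzmann probability."""
    rng = random.Random(seed)
    n_merges = len(dims) - 2
    order = list(range(n_merges))
    cur = best = chain_cost(dims, order)
    best_order = list(order)
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-9  # linear cooling schedule
        i, j = rng.randrange(n_merges), rng.randrange(n_merges)
        order[i], order[j] = order[j], order[i]
        new = chain_cost(dims, order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, list(order)
        else:
            order[i], order[j] = order[j], order[i]  # revert swap
    return best, best_order
```

For `dims = [10, 30, 5, 60]` the annealer settles on multiplying the first pair first, costing 10·30·5 + 10·5·60 = 4500 multiplications rather than 27000 for the other order.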
- `ContractionTree.plot_flat`: a new method for plotting the contraction tree as a flat diagram showing all indices on every intermediate (without requiring any graph layouts), which is useful for visualizing and understanding small contractions.
- `HyperGraph.plot`: support showing hyper outer indices, multi-edges, and automatic unique coloring of nodes and indices (to match `plot_flat`).
- `ContractionTree.restore_ind`: for 'unslicing' or 'unprojecting' previously removed indices.
- `ContractionTree.from_path`: add option `complete` to automatically complete the tree given an incomplete path (usually disconnected subgraphs - #29).
- `ContractionTree.get_incomplete_nodes`: for finding all uncontracted childless-parentless node groups.
- `ContractionTree.autocomplete`: for automatically completing a contraction tree, using the above method.
- `tree.plot_flat`: show any preprocessing steps and optionally list sliced indices.
- `autojit="auto"` for contractions, which by default turns on jit for `backend="jax"` only.
- `tree.describe`: gives various levels of information about a tree, e.g. `tree.describe("full")` and `tree.describe("concise")`.
- `get_default_objective`: returns the objective function used during optimization, for simpler further refinement or scoring, where it is now picked up automatically.
- The default is now `'greedy'` rather than `'auto'`. This might make individual trials slightly worse but makes each cheaper; see the discussion in #27.

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.6...v0.6.0
## v0.5.6

**Bug fixes**

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.5...v0.5.6
## v0.5.5

**Enhancements**

- `HyperOptimizer`: by default simply warn if an individual trial fails, rather than raising an exception. This is to ensure rare failures do not spoil an entire optimization run. The behavior can be controlled with the `on_trial_error` argument.

**Bug fixes**

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.4...v0.5.5
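The warn-rather-than-raise behaviour above can be sketched as follows. This is an illustrative helper, not cotengra's internals, and the exact set of accepted `on_trial_error` values (`'warn'`, `'raise'`, `'ignore'`) is an assumption:

```python
import warnings

def run_trials(trial_fn, n, on_trial_error="warn"):
    """Run `n` trials, collecting results and handling individual
    failures according to `on_trial_error`."""
    results = []
    for i in range(n):
        try:
            results.append(trial_fn(i))
        except Exception as e:
            if on_trial_error == "raise":
                raise
            if on_trial_error == "warn":
                warnings.warn(f"trial {i} failed: {e!r}")
            # 'ignore': silently skip the failed trial
    return results
```

A single flaky trial then costs one result (and possibly a warning) instead of aborting the whole run.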
## v0.5.4

**Bug fixes**

- The `auto` and `auto-hq` optimizers are now safe to run under multi-threading.

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.3...v0.5.4

## v0.5.3

**Features**

- `einsum`, `einsum_tree` and `einsum_expression`: add support for all numpy input formats, including interleaved indices and ellipses.
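The 'interleaved' input format can be illustrated with a small converter from interleaved arguments to an equation string. This is a sketch of the format only, not cotengra's parser, and the helper name is made up:

```python
import string

def interleaved_to_equation(*args):
    """Turn numpy-style interleaved einsum arguments, e.g.
    (A, [0, 1], B, [1, 2], [0, 2]), into an equation string
    like 'ab,bc->ac' plus the list of operand arrays."""
    operands = list(args)
    # an odd number of arguments means an explicit output index list
    out_ixs = operands.pop() if len(operands) % 2 else None
    arrays = operands[0::2]
    index_lists = operands[1::2]
    sym = {}  # integer index -> letter, allocated in first-seen order

    def to_chars(ixs):
        return "".join(
            sym.setdefault(ix, string.ascii_lowercase[len(sym)])
            for ix in ixs
        )

    lhs = ",".join(to_chars(ixs) for ixs in index_lists)
    if out_ixs is None:
        # implicit output: indices appearing exactly once, in order
        counts = {}
        for ixs in index_lists:
            for ix in ixs:
                counts[ix] = counts.get(ix, 0) + 1
        out_ixs = sorted(ix for ix, c in counts.items() if c == 1)
    return lhs + "->" + to_chars(out_ixs), arrays
```

So the interleaved call `(A, [0, 1], B, [1, 2], [0, 2])` corresponds to the equation `"ab,bc->ac"`, matching how `np.einsum` interprets the same arguments.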
**Bug fixes**

- `opt_einsum` dependence (via a `PathOptimizer` method)

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.2...v0.5.3
## v0.5.2

- `array_contract_path`

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.1...v0.5.2
## v0.5.0

- `einsum`
- `einsum_tree`
- `einsum_expression`
- `array_contract`
- `array_contract_tree`
- `array_contract_expression`
- `AutoOptimizer`
- `AutoHQOptimizer`
- (`numpy`, `opt_einsum`)
- `tree.plot_contractions`

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.4.0...v0.5.0
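As a rough illustration of what an `array_contract(arrays, inputs, output)`-style call computes, here is a tiny reference implementation over nested lists. It is a sketch only, exponentially slow, and unrelated to the optimized tree-based routines:

```python
from itertools import product

def _get(arr, ixs, assign):
    """Look up one element of a nested-list 'array'."""
    for ix in ixs:
        arr = arr[assign[ix]]
    return arr

def array_contract(arrays, inputs, output):
    """Reference contraction: multiply all operands elementwise over
    a joint index assignment, summing indices absent from `output`."""
    # read off the dimension of every index from the operands
    sizes = {}
    for arr, ixs in zip(arrays, inputs):
        a = arr
        for ix in ixs:
            sizes[ix] = len(a)
            a = a[0]
    summed = sorted(set().union(*map(set, inputs)) - set(output))

    def entry(out_assign):
        total = 0
        for vals in product(*(range(sizes[ix]) for ix in summed)):
            assign = dict(zip(summed, vals), **out_assign)
            term = 1
            for arr, ixs in zip(arrays, inputs):
                term *= _get(arr, ixs, assign)
            total += term
        return total

    def build(ixs, out_assign):
        if not ixs:
            return entry(out_assign)
        return [build(ixs[1:], {**out_assign, ixs[0]: v})
                for v in range(sizes[ixs[0]])]

    return build(list(output), {})
```

For example, `array_contract([A, B], ["ab", "bc"], "ac")` is matrix multiplication, and `array_contract([A], ["aa"], "")` is the trace.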
## v0.4.0

- `cotengra` versions of `'greedy'`, `'optimal'`, `'auto'`, `'auto-hq'`
- `cotengrust` integration for fast greedy/optimal subtree reconfiguration

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.3.2...v0.4.0
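The flavour of a greedy path optimizer can be sketched as below, scoring candidate pairs purely by the size of the resulting intermediate. This is a toy, not the scoring used by cotengra or cotengrust:

```python
def greedy_path(inputs, output, size_dict):
    """Toy greedy optimizer: repeatedly contract the pair of
    tensors whose resulting intermediate is smallest."""
    terms = [set(ix) for ix in inputs]
    path = []
    while len(terms) > 1:
        best = None
        for i in range(len(terms)):
            for j in range(i + 1, len(terms)):
                rest = [t for k, t in enumerate(terms)
                        if k not in (i, j)]
                # indices still needed by other tensors or the output
                keep = set(output).union(*rest)
                new = (terms[i] | terms[j]) & keep
                size = 1
                for ix in new:
                    size *= size_dict[ix]
                if best is None or size < best[0]:
                    best = (size, i, j, new)
        _, i, j, new = best
        path.append((i, j))
        terms = [t for k, t in enumerate(terms)
                 if k not in (i, j)] + [new]
    return path
```

On `ab,bc,cd->ad` with `b` and `d` large, it contracts the first pair first, eliminating the large shared `b` index early.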
## v0.3.2

- `optimize_greedy`

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.3.1...v0.3.2