Machine learning algorithms for many-body quantum systems
- Added a new `nk.utils.StaticRange` object to store the local values that label the local degrees of freedom. This special object is jax-friendly and can be converted to arrays, and it allows for easy conversion from the local degrees of freedom to integers that can be used to index into arrays, and back. While those objects are not really used internally yet, in the future they will be used to simplify the implementations of operators and other objects #1732.
- Added `driver.run(..., timeit=True)` to all drivers when running them.
- Added the `nk.models.tensor_networks` namespace. Those also replace the previous tensor-network implementations, which were de facto broken #1745.
- Added `netket.operator.BoseHubbardJax` and split the numba implementation into a separate class #1773.
- NetKet now sets `jax_default_device` according to the local rank. This behaviour should allow users to not have to specify `CUDA_VISIBLE_DEVICES` and local MPI ranks in their scripts. This behaviour is only activated when running using MPI, and is not used when using the experimental sharding mode. To disable this functionality, set `NETKET_MPI_AUTODETECT_LOCAL_GPU=0` #1757.
- `netket.experimental.models.Slater2nd` now also implements the generalized Hartree-Fock, as well as the restricted and unrestricted HF of before #1765.
- Added `nk.experimental.models.MultiSlater2nd`. This state has the same options as {class}`~netket.experimental.models.Slater2nd` #1765.
- Support for `jax>=0.4.27` #1801.
- The `out` keyword of discrete Hilbert indexing methods (`all_states`, `numbers_to_states` and `states_to_numbers`), deprecated in the last release, has been removed completely #1722.
- Hilbert spaces now store their local values as `nk.utils.StaticRange` objects instead of lists of floats. The constructors have been updated accordingly. {class}`~nk.utils.StaticRange` is a range-like object that is jax-compatible and from now on should be used to index into local Hilbert spaces #1732.
- The `numbers_to_states` and `states_to_numbers` methods of {class}`netket.hilbert.DiscreteHilbert` must now be jax-jittable. Custom Hilbert spaces using non-jittable functions have to be adapted by including a {func}`jax.pure_callback` in the `numbers_to_states`/`states_to_numbers` member functions #1748.
- {attr}`~netket.vqs.MCState.chunk_size` must be set to an integer and will error immediately otherwise. This might break some code, but in general should give more informative error messages overall #1798.
- `netket.nn.states_to_numbers`
is now deprecated. Please use {meth}`~DiscreteHilbert.numbers_to_states` directly.
- The generation of random states of `netket.hilbert.Fock` and `netket.hilbert.Spin` has been rewritten in Jax, and the `init` and `reset` functions of `netket.sampler.MetropolisSampler` are now jitted, for better performance and improved compatibility with sharding #1721.
- `netket.hilbert.index`, used by `HomogeneousHilbert` (including `Spin` and `Fock`), has been rewritten so that larger spaces with a sum constraint can be indexed. This can be useful for `netket.sampler.ExactSampler` and `netket.vqs.FullSumState`, as well as for ED calculations #1720.
- Duplicating a `netket.vqs.MCState` now leads to perfectly deterministic, identical samples between two different copies of the same `MCState`, even if the sampler is changed. Previously, duplicating an `MCState` and changing the sampler on the two copies of the same state would lead to some completely random seed being used, and therefore different samples being generated. This change is needed to eventually achieve proper checkpointing of our calculations #1778.
- `netket.experimental.sampler.MetropolisPt` now accepts a distribution (`lin` or `log`) for the distribution of the temperatures, or a custom array #1786.
- Removed `netket.sampler.sample_next`, which was deprecated in NetKet 3.3 (December 2021) #17XX.
- Use `shard_map` to avoid unnecessary collective communication when doing batched indexing of sharded arrays #1777.
- Handle `CUDA_VISIBLE_GPUS` when running with MPI by @PhilipVinc in https://github.com/netket/netket/pull/1757
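The value-to-index conversion described for `nk.utils.StaticRange` above can be pictured with a stdlib-only stand-in (a hypothetical class, not the real NetKet API):

```python
# Hypothetical stdlib-only stand-in (NOT the real nk.utils.StaticRange API),
# illustrating the idea: a (start, step, length) range whose values label the
# local degrees of freedom and convert to/from integer array indices.
class StaticRangeSketch:
    def __init__(self, start, step, length):
        self.start, self.step, self.length = start, step, length

    def values(self):
        # the local values labelling the degrees of freedom
        return [self.start + i * self.step for i in range(self.length)]

    def value_to_index(self, x):
        # local value -> integer usable to index into arrays
        return round((x - self.start) / self.step)

    def index_to_value(self, i):
        # integer index -> local value
        return self.start + i * self.step

# e.g. a spin-1/2 site with local values (-1, +1):
spins = StaticRangeSketch(start=-1, step=2, length=2)
```

The round-trip between local values and integer indices is what makes such an object convenient for indexing into arrays of local amplitudes.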
Full Changelog: https://github.com/netket/netket/compare/v3.11.4...v3.12.0
Updates for deprecations in jax 0.4.25
Full Changelog: https://github.com/netket/netket/compare/v3.11.3...v3.11.4
Bugfix release addressing the following issues:
- A bug where the hermitian conjugate `.H` was the identity. This could break code relying on complex-valued fermionic operators #1743.
- A bug in `netket.utils.struct.Pytree`, where the cache of cached properties was not cleared when `replace` was used to copy and modify the Pytree #1750.
- Updated the dependency pin to `optax<0.3`, following the release of `optax` 0.2 #1751.

Full Changelog: https://github.com/netket/netket/compare/v3.11.2...v3.11.3
Bugfix release to solve the following issues:
- A bug in `nk.sampler.rules.MultipleRules` #1729.
- The `t0` initial time being ignored if `dt` was a float, as well as a wrong `repr` method leading to incomprehensible stack traces #1736.

Full Changelog: https://github.com/netket/netket/compare/v3.11.1...v3.11.2
This release supports Python 3.12 through the latest release of Numba, introduces several new jax-compatible operators, and adds a new experimental way to distribute calculations among multiple GPUs without using MPI.
We have a few breaking changes as well: deprecations that were issued more than 18 months ago have now been finalized, most notably the `dtype` argument to several models and layers, some keywords of GCNNs, and setting the number of chains of exact samplers.
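The multi-GPU distribution mode mentioned above is opt-in; a minimal sketch of enabling it (the flag names come from these release notes; the variables must be set before jax, and hence netket, is imported):

```python
# Sketch: opting in to NetKet's experimental sharding mode. Environment
# variables must be set before importing jax/netket for them to take effect.
import os

os.environ["NETKET_EXPERIMENTAL_SHARDING"] = "1"
# On CPU there is a single process, but several threads can emulate devices:
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=8"

# import netket as nk  # samples would now be distributed across all devices
```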
- New models and layers added to `nkx.models` and `nkx.nn` #1305.
- Experimental support for running on multiple jax devices without MPI, enabled by setting `NETKET_EXPERIMENTAL_SHARDING=1`. Parallelization is achieved by distributing the Markov chains / samples equally across all available devices, utilizing `jax.Array` sharding. On GPU, multi-node setups are supported via `jax.distributed`, whereas on CPU it is limited to a single process, but several threads can be used by setting `XLA_FLAGS='--xla_force_host_platform_device_count=XX'` #1511.
- `netket.experimental.operator.FermionOperator2nd` is a new Jax-compatible implementation of fermionic operators. It can also be constructed starting from a standard fermionic operator by calling `operator.to_jax_operator()`, or used in combination with `pyscf` converters #1675, #1684.
- `netket.operator.LocalOperatorJax` is a new Jax-compatible implementation of local operators. It can also be constructed starting from a standard operator by calling `operator.to_jax_operator()` #1654.
- Added `netket.logging.AbstractLog` #1665.
- The {class}`~netket.experimental.sampler.ParticleExchange` sampler and the corresponding rule {class}`~netket.experimental.sampler.rules.ParticleExchangeRule` have been added, which special-case {class}`~netket.sampler.ExchangeSampler` to fermionic spaces in order to avoid proposing moves where the two sites exchanged have the same population #1683.
- The `netket.models.Jastrow` wave-function now only has {math}`N (N-1)` variational parameters, instead of the {math}`N^2` redundant ones it had before. The saving and loading format has now changed and won't be compatible with previous versions #1664.
- Removed deprecated functionality from the `nk.sampler` namespace (see original commit 1f77ad8267e16fe8b2b2641d1d48a0e7ae94832e).
- Removed the deprecated `extra_bias` option of equivariant networks/GCNNs (see original commit c61ea542e9d0f3e899d87a7471dea96d4f6b152d).
- Removed the deprecated `dtype=` attribute of several modules in `nk.nn` and `nk.models`, which had been printing an error since April 2022. You should update usages of `dtype=` to `param_dtype=` #1724
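The finalized `dtype=` to `param_dtype=` rename amounts to a keyword change at the call site; schematically, with a stand-in model class (not the real `nk.models` API):

```python
# Stand-in dataclass mimicking a NetKet model such as nk.models.RBM,
# to illustrate the finalized keyword rename. Hypothetical class.
import dataclasses

@dataclasses.dataclass
class RBMSketch:
    alpha: int = 1
    param_dtype: type = float  # formerly exposed as `dtype=` (now removed)

# Before (removed):  RBMSketch(alpha=2, dtype=complex)  -> TypeError
# After:
model = RBMSketch(alpha=2, param_dtype=complex)
```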
- {attr}`MetropolisSampler.n_sweeps` has been renamed to {attr}`~netket.sampler.MetropolisSampler.sweep_size` for clarity. Using `n_sweeps` when constructing the sampler now throws a deprecation warning; `sweep_size` should be used instead going forward #1657.
- Custom dataclasses created with `netket.utils.struct.dataclass` are deprecated because the base class is now a {class}`netket.utils.struct.Pytree`. The only change needed is to remove the `dataclass` decorator and define a standard `__init__` method #1653.
- The `out` keyword of discrete Hilbert indexing methods (`all_states`, `numbers_to_states` and `states_to_numbers`) is deprecated and will be removed in the next release. Plan ahead and remove usages to avoid breaking your code 3 months from now #1725!
- The new {class}`netket.utils.struct.Pytree` can be used to create Pytrees for which inheritance automatically works and for which it is possible to define `__init__`. Several structures such as samplers and rules have been transitioned to this new interface instead of the old-style `@struct.dataclass` #1653.
- {class}`~netket.experimental.operator.FermionOperator2nd` and related classes now store the constant diagonal shift as another term instead of a completely special-cased scalar value. The same operators now also respect the `cutoff` keyword argument more strictly #1686.

Full list of PRs merged:
- `np.testing` for better error messages by @wdphy16 in https://github.com/netket/netket/pull/1663
- `DiscreteOperator` and sparse array by @wdphy16 in https://github.com/netket/netket/pull/1661
- `autoflush_cost` when flushing parameters in `JsonLog` by @wdphy16 in https://github.com/netket/netket/pull/1662
- `diag_scale` through `QGTAuto` by @wdphy16 in https://github.com/netket/netket/pull/1692
- `variables` with `parameters` in automatic QGT selector by @alleSini99 in https://github.com/netket/netket/pull/1693
- `to_local_operator` to return `LocalOperatorJax` instead of `LocalOperator` by @alleSini99 in https://github.com/netket/netket/pull/1694
- `x in ys` where `x` is an array by @wdphy16 in https://github.com/netket/netket/pull/1701
- `vs.sample(chain_length=X, n_samples=X)` by @PhilipVinc in https://github.com/netket/netket/pull/1709
- `number_to_hilbert` by @PhilipVinc in https://github.com/netket/netket/pull/1712
Full Changelog: https://github.com/netket/netket/compare/v3.10.1...v3.11.0
Full Changelog: https://github.com/netket/netket/compare/v3.10.1...v3.10.2
Full Changelog: https://github.com/netket/netket/compare/v3.10...v3.10.1
The highlights of this version are a new experimental driver to optimise networks with millions of parameters using SR, and new utility functions to convert a pyscf molecule to a netket Hamiltonian.
Read below for a more detailed changelog.
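The linear-algebra fact behind the sample-side SR formulation highlighted above is that, for a Jacobian of shape (n_samples, n_params), the parameter-side and sample-side Gram matrices share their nonzero eigenvalues, so one can invert whichever is smaller. A small numpy illustration (not NetKet code):

```python
# Illustration (not NetKet code): S = O^T O is (n_params x n_params) while
# T = O O^T is (n_samples x n_samples); their nonzero eigenvalues coincide,
# which is what makes the sample-side formulation cheap when
# n_params >> n_samples.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 8, 50
O = rng.normal(size=(n_samples, n_params))  # stand-in for a log-derivative Jacobian

S = O.T @ O  # parameter side: 50 x 50, rank <= 8
T = O @ O.T  # sample side:     8 x 8

top_S = np.sort(np.linalg.eigvalsh(S))[-n_samples:]  # top nonzero eigenvalues
eig_T = np.sort(np.linalg.eigvalsh(T))
match = np.allclose(top_S, eig_T)
```

For a network with millions of parameters and a few thousand samples, inverting the sample-side matrix is the only tractable option.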
- Added the new `netket.experimental.driver.VMC_SRt` driver, which leads to parameter updates identical to those of standard Stochastic Reconfiguration with diagonal-shift regularization. It is therefore essentially equivalent to using the standard {class}`netket.driver.VMC` with the {class}`netket.optimizer.SR` preconditioner. The advantage of this method is that it requires inverting a matrix whose side is the number of samples instead of the number of parameters, making this formulation particularly useful in typical deep-learning scenarios #1623.
- Added {func}`netket.experimental.operator.from_pyscf_molecule` to construct the electronic Hamiltonian of a given molecule specified through pyscf. This is accompanied by {func}`netket.experimental.operator.pyscf.TV_from_pyscf_molecule` to compute the T and V tensors of a pyscf molecule #1602.
- Added the flag `NETKET_ENABLE_X64=0`, which also sets `JAX_ENABLE_X64=0`. When running with this flag, the number of warnings printed by jax is considerably reduced as well #1544.
- Added {func}`netket.operator.spin.identity` and {func}`netket.operator.boson.identity` #1601.
- Added a {class}`netket.hilbert.Particle` constructor that only takes as input the number of dimensions of the system #1577.
- Added the new {class}`netket.experimental.models.Slater2nd` model implementing a Slater ansatz #1622.
- Added the new {func}`netket.jax.logdet_cmplx` function to compute the complex log-determinant of a batch of matrices #1622.
- Some {class}`netket.experimental.hilbert.SpinOrbitalFermions` attributes have been changed: {attr}`~netket.experimental.hilbert.SpinOrbitalFermions.n_fermions` now always returns an integer with the total number of fermions in the system (if specified). A new attribute {attr}`~netket.experimental.hilbert.SpinOrbitalFermions.n_fermions_per_spin` has been introduced that returns the same tuple of fermion numbers per spin subsector as before. A few fields are now marked as read-only, as modifications were ignored #1622.
- The {class}`netket.nn.blocks.SymmExpSum` layer is now normalised by the number of elements in the symmetry group, in order to maintain a reasonable normalisation #1624.
- The spin labels of {func}`netket.experimental.operator.fermion.create` and similar operators have now changed from the eigenvalue of the spin operator ({math}`\pm 1/2` and so on) to the eigenvalue of the Pauli matrices ({math}`\pm 1` and so on) #1637.
- Improved the performance of {class}`~netket.operator.LocalOperator`, especially in the case of large local Hilbert spaces. Also leveraged sparsity in the terms to speed up compilation (`_setup`) in the same cases #1558.
- {class}`netket.nn.blocks.SymmExpSum` now works with inputs of arbitrary dimensions, while previously it errored for all inputs that were not 2D #1616
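The group-size normalisation mentioned for `netket.nn.blocks.SymmExpSum` can be sketched in plain Python; this is a schematic stand-in (assuming a log-mean-exp symmetrisation over the group), not the real layer:

```python
# Schematic stand-in for a SymmExpSum-style block: symmetrise a log-amplitude
# by log-mean-exp over a symmetry group. Dividing by len(group) keeps the
# output scale independent of the group size. Hypothetical, stdlib only.
import math

def symm_exp_sum(log_psi, group, x):
    return math.log(sum(math.exp(log_psi(g(x))) for g in group) / len(group))

# With a trivial group the input passes through unchanged, whatever its size:
identity = lambda x: x
out = symm_exp_sum(lambda x: x, [identity, identity, identity], 2.0)
```

Without the division by `len(group)`, the output would grow by `log(|G|)` as the group gets larger, which is the normalisation issue the release note refers to.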
- NetKet no longer uses `FrozenDict` from `flax`, and instead returns standard dictionaries for the variational parameters from the variational state. This makes it much easier to edit parameters #1547.
- Fixed a bug where {class}`netket.operator.LocalOperator` could not be built with `np.matrix` objects obtained by converting scipy sparse matrices to dense #1597.
- Fixed combining {class}`netket.experimental.operator.FermionOperator2nd` with other operators #1599.
- Fixed the scaling of {func}`netket.jax.jacobian` by the square root of the number of samples. Previously, when specifying `center=True`, we were incorrectly rescaling the output #1614.
- Fixed a bug in {class}`netket.operator.PauliStrings` that caused the dtype to get out of sync with the dtype of the internal arrays, causing errors when manipulating them symbolically #1619.
- It is now possible to use {class}`netket.operator.DiscreteJaxOperator` as observables with all drivers #1625.
- The `get_conn` method was returning values as if the operator was transposed, and has now been fixed. This will break the expectation value of non-symmetric fermionic operators, but hopefully nobody was looking into them #1640.

Full Changelog: https://github.com/netket/netket/compare/v3.9.1...v3.9.2