Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
Bug fixes:
- Fix in `arithmetics.py` (by @LScheib)
- `ht.array` constructor respects implicit torch device when `copy` is set to `False` (by @JuanPedroGHM)

Contributors: @ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @LScheib, @mrfh92, @mtar, @samadpls

Bug fixes:
- Fixed `ht.regression.Lasso()` on GPU (by @mrfh92)
- Fixed `ht.diff` for the 1-element-axis edge case (by @mtar)
- Fixed `ht.allclose` on `ht.inv` operations for all torch versions (by @ClaudiaComito)
- Updated the `pre-commit` configuration to use the `main` branch (by @ClaudiaComito)

Contributors: @ClaudiaComito, @JuanPedroGHM, @bhagemeier, @mrfh92 and @mtar
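For context on the `ht.diff` fix above: the edge case is differencing along an axis of length 1. Heat follows NumPy semantics here, so the expected behavior can be illustrated with plain NumPy (this is an illustration of the semantics, not Heat's own code):

```python
import numpy as np

# Differencing along an axis with a single element should yield an
# empty (length-0) axis rather than raising an error.
x = np.array([[5], [9]])   # shape (2, 1)
d = np.diff(x, axis=1)     # axis 1 has only one element

print(d.shape)  # (2, 0): the differenced axis shrinks by one, to zero
```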
This release includes many important updates (see below). We particularly would like to thank our enthusiastic GSoC2022 / tentative GSoC2023 contributors @Mystic-Slice @neosunhan @Sai-Suraj-27 @shahpratham @AsRaNi1 @Ishaan-Chandak 🙏🏼 Thank you so much!
Highlights:
- `sparse` module: distributed Compressed Sparse Row matrix (by @Mystic-Slice)
- `heat.reshape()` speed-up (by @ClaudiaComito)
- `heat.pow()` speed-up when the exponent is an `int` (by @ClaudiaComito, @coquelin77)
- `heat.array()` defaults to `copy=None` (i.e., copies only if necessary) (by @ClaudiaComito, @neosunhan)

Changelog:
- `DNDarray.__array__()` method for interoperability with `numpy`, `xarray` (by @ClaudiaComito)
- `ht.array()` defaults to `copy=None` (i.e., copies only if necessary) (by @ClaudiaComito)
- `broadcast_arrays`, `broadcast_to` (by @neosunhan)
- Renamed the `keepdim` kwarg to `keepdims` (by @neosunhan)
- `sparse` module (by @Mystic-Slice)
- `ht.sparse.sparse_csr_matrix` (by @Ishaan-Chandak)
- `random` module (by @Mystic-Slice)
- `average()` along a tuple of axes: fixed the shape of `weights` to match the shape of the input (by @Mystic-Slice)

Contributors: @AsRaNi1, @ClaudiaComito, @Ishaan-Chandak, @JuanPedroGHM, @Mystic-Slice, @Sai-Suraj-27, @bhagemeier, @coquelin77, @mrfh92, @mtar, @neosunhan, @shahpratham
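The `DNDarray.__array__()` entry above relies on the standard NumPy conversion protocol: any object exposing an `__array__` method can be consumed by `np.asarray` and, through it, by libraries such as `xarray`. A minimal sketch of that mechanism, using a hypothetical container class (not Heat's implementation):

```python
import numpy as np

class MiniArray:
    """Hypothetical container that opts into NumPy conversion
    by implementing the __array__ protocol."""

    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None, copy=None):
        # NumPy calls this hook when converting the object.
        return np.array(self._data, dtype=dtype)

m = MiniArray([1, 2, 3])
a = np.asarray(m)   # dispatches to MiniArray.__array__
print(a.sum())      # 6
```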
Features:
- `linalg.solver.lanczos`: support for double precision and complex data types (by @ClaudiaComito)

Bug fixes:
- `ht.array`: closed a loophole allowing `DNDarray` construction with incompatible shapes of local arrays (by @Mystic-Slice)

Contributors: @ClaudiaComito, @JuanPedroGHM, @Mystic-Slice, @coquelin77, @mtar, @github-actions[bot]
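For background, `linalg.solver.lanczos` implements the Lanczos iteration, which builds an orthonormal basis in which a symmetric (or, with the new complex support, Hermitian) matrix becomes tridiagonal. A plain-NumPy sketch of the real-symmetric case with full reorthogonalization; Heat's distributed implementation differs, this only illustrates the algorithm:

```python
import numpy as np

def lanczos(A, m, rng=None):
    """Lanczos tridiagonalization of a real symmetric matrix A.

    Returns V (n x m, orthonormal columns) and T (m x m, tridiagonal)
    with V.T @ A @ V ~= T. Full reorthogonalization keeps the basis
    numerically orthogonal (simple, but costlier than textbook Lanczos).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        # Orthogonalize the new direction against all previous vectors.
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# With m == n, T is orthogonally similar to A, so eigenvalues agree.
rng = np.random.default_rng(42)
B = rng.standard_normal((6, 6))
A = B + B.T
V, T = lanczos(A, 6)
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A))
```

In practice one runs far fewer iterations than the matrix dimension; the extreme eigenvalues of the small tridiagonal `T` then approximate those of `A`.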
What's Changed:
- `signal` module and memory-distributed 1-D convolution with `ht.convolve()`
- `ht.save_csv()`
- New features in the `linalg` and `manipulations` modules
- `randint` accepts ints for `size` by @mtar in https://github.com/helmholtz-analytics/heat/pull/916
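The memory-distributed 1-D convolution behind `ht.convolve()` splits the signal across processes, convolves each local chunk, and combines the overlapping boundary contributions. The core idea can be sketched serially in NumPy with the overlap-add scheme; Heat's actual MPI halo-exchange implementation is more involved, and `chunked_convolve` is a hypothetical helper, not Heat API:

```python
import numpy as np

def chunked_convolve(a, v, n_chunks):
    """Full convolution of signal a with kernel v, chunk by chunk.

    Each chunk is convolved independently; neighboring results overlap
    by len(v) - 1 samples and are summed into the global output
    (overlap-add). Convolution is linear, so the sum equals np.convolve(a, v).
    """
    out = np.zeros(len(a) + len(v) - 1)
    offset = 0
    for chunk in np.array_split(a, n_chunks):
        out[offset : offset + len(chunk) + len(v) - 1] += np.convolve(chunk, v)
        offset += len(chunk)
    return out

a = np.arange(16, dtype=float)
v = np.array([1.0, 0.5, 0.25])
assert np.allclose(chunked_convolve(a, v, 4), np.convolve(a, v))
```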
- `out` and `where` args for `ht.div` by @neosunhan in https://github.com/helmholtz-analytics/heat/pull/945
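The `out` and `where` arguments follow the NumPy ufunc calling convention: `out` supplies a preallocated result buffer, and `where` masks which elements are computed (unmasked slots keep the buffer's existing values). The NumPy equivalent, which `ht.div` mirrors, looks like:

```python
import numpy as np

a = np.array([4.0, 9.0, 5.0])
b = np.array([2.0, 3.0, 0.0])

# Divide only where b != 0; masked-out slots keep the out buffer's zeros.
res = np.divide(a, b, out=np.zeros_like(a), where=(b != 0))
print(res)  # [2. 3. 0.]
```

This pattern is a common way to avoid division-by-zero warnings while controlling the fill value through the `out` buffer.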
Full Changelog: https://github.com/helmholtz-analytics/heat/compare/v1.1.0...v1.2.0
- Updated the `torchvision` version range to match the supported `pytorch` version range.

For other details, see the CHANGELOG.
Heat v1.0 comes with some major updates:
- `nn` module for data-parallel neural networks

We're pinning PyTorch to version 1.6 after having run into problems with the recently released 1.7. This is a temporary solution!

Also, bug fixes: