Helmholtz Analytics Heat: Versions

Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python

v1.4.0

1 month ago

Changes

Documentation

  • #1406 New tutorials for interactive parallel mode for both HPC and local usage (by @ClaudiaComito)

🔥 Features

  • #1288 Batch-parallel K-means and K-medians (by @mrfh92)
  • #1228 Introduce in-place operators for arithmetics.py (by @LScheib)
  • #1218 Distributed Fast Fourier Transforms (by @ClaudiaComito); see the sketch after this list
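
As a hedged illustration of the in-place operators (#1228) and the distributed FFTs (#1218): a minimal sketch, assuming ht.fft mirrors NumPy's fft naming; launch under MPI (e.g. mpirun -n 4 python demo.py) to see the distribution at work.

```python
import heat as ht

# A signal distributed across MPI processes along axis 0.
x = ht.arange(16, dtype=ht.float32, split=0)

# In-place arithmetic (#1228): updates the local tensors without
# allocating a new DNDarray.
x += 1.0
x *= 0.5

# Distributed FFT (#1218); a NumPy-like ht.fft.fft interface is assumed.
spectrum = ht.fft.fft(x)
print(spectrum.shape, spectrum.split)
```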

Bug fixes

  • #1363 ht.array constructor respects implicit torch device when copy is set to False (by @JuanPedroGHM)
  • #1216 Avoid unnecessary gathering of distributed operand (by @samadpls)
  • #1329 Refactoring of QR: stabilized Gram-Schmidt for split=1 and TS-QR for split=0 (by @mrfh92); see the sketch after this list
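
To illustrate the refactored QR (#1329), a hedged sketch: a NumPy-like interface returning a (Q, R) pair is assumed, with the algorithm (TS-QR vs. stabilized Gram-Schmidt) selected by the split axis.

```python
import heat as ht

# Tall-skinny matrix distributed along the rows (split=0): the TS-QR path.
a = ht.random.randn(1000, 8, split=0)
q, r = ht.linalg.qr(a)

# Orthogonality check: Q^T Q should be close to the identity.
print(ht.allclose(q.T @ q, ht.eye(8), atol=1e-6))
```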

Interoperability

  • #1418 and #1290: Support PyTorch 2.2.2 (by @mtar)
  • #1315 and #1337: Fix some NumPy deprecations in the core and statistics tests (by @FOsterfeld)

Contributors

@ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @LScheib, @mrfh92, @mtar, @samadpls

v1.3.1

5 months ago

Bug fixes

  • #1259 Bug-fix for ht.regression.Lasso() on GPU (by @mrfh92)
  • #1201 Fix ht.diff for 1-element-axis edge case (by @mtar); see the sketch below
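
A hedged sketch of the ht.diff edge case fixed in #1201; the GPU Lasso fix (#1259) needs no API change and is exercised through ht.regression.Lasso() as before.

```python
import heat as ht

x = ht.array([[1.0, 2.0, 4.0]])   # shape (1, 3)

print(ht.diff(x, axis=1))         # differences along the length-3 axis
print(ht.diff(x, axis=0).shape)   # length-1 axis: empty result of shape (0, 3)
```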

Changes

Interoperability

  • #1257 Docker release 1.3.x update (by @JuanPedroGHM)

Maintenance

  • #1274 Update version before release (by @ClaudiaComito)
  • #1267 Unit tests: Increase tolerance for ht.allclose on ht.inv operations for all torch versions (by @ClaudiaComito)
  • #1266 Sync pre-commit configuration with main branch (by @ClaudiaComito)
  • #1264 Fix Pytorch release tracking workflows (by @mtar)
  • #1234 Update sphinx package requirements (by @mtar)
  • #1187 Create configuration file for Read the Docs (by @mtar)

Contributors

@ClaudiaComito, @JuanPedroGHM, @bhagemeier, @mrfh92 and @mtar

v1.3.0

10 months ago

This release includes many important updates (see below). We would particularly like to thank our enthusiastic GSoC2022 / tentative GSoC2023 contributors @Mystic-Slice @neosunhan @Sai-Suraj-27 @shahpratham @AsRaNi1 @Ishaan-Chandak 🙏🏼 Thank you so much!

Highlights

  • #1155 Support PyTorch 2.0.1 (by @ClaudiaComito)
  • #1152 Support AMD GPUs (by @mtar)
  • #1126 Distributed hierarchical SVD (by @mrfh92)
  • #1028 Introducing the sparse module: Distributed Compressed Sparse Row Matrix (by @Mystic-Slice)
  • Performance improvements:
    • #1125 distributed heat.reshape() speed-up (by @ClaudiaComito)
    • #1141 heat.pow() speed-up when exponent is int (by @ClaudiaComito @coquelin77)
    • #1119 heat.array() default to copy=None (i.e., copy only if necessary) (by @ClaudiaComito @neosunhan); see the sketch after this list
  • #970 Dockerfile and accompanying documentation (by @bhagemeier)
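
A sketch of the performance-related highlights above; the copy=None semantics follow the NumPy convention (copy only when needed), as described in #1119.

```python
import torch
import heat as ht

# ht.array(..., copy=None) (#1119): wrap an existing torch tensor
# without copying when dtype and device already match.
t = torch.arange(6, dtype=torch.float32)
a = ht.array(t, copy=None)

# Distributed reshape (#1125) and integer-exponent pow (#1141).
b = ht.arange(24, split=0).reshape((4, 6))
c = ht.pow(b, 2)   # fast path for integer exponents
print(c.shape, c.split)
```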

Changelog

Array-API compliance / Interoperability

  • #1154 Introduce DNDarray.__array__() method for interoperability with numpy, xarray (by @ClaudiaComito); see the sketch after this list
  • #1147 Adopt NEP29, drop support for PyTorch 1.7, Python 3.6 (by @mtar)
  • #1119 ht.array() default to copy=None (i.e., copy only if necessary) (by @ClaudiaComito)
  • #1020 Implement broadcast_arrays, broadcast_to (by @neosunhan)
  • #1008 API: Rename keepdim kwarg to keepdims (by @neosunhan)
  • #788 Interface for DPPY interoperability (by @coquelin77 @fschlimb)
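
A hedged sketch of the interoperability items: __array__() (#1154) lets NumPy consume a DNDarray (it exposes the process-local data, so a non-distributed array is used here), and #1020/#1008 add broadcast_to and the keepdims spelling.

```python
import numpy as np
import heat as ht

x = ht.arange(6)       # not split: every process holds the full data
np_x = np.asarray(x)   # goes through DNDarray.__array__() (#1154)
print(type(np_x), np_x.sum())

# Broadcasting helpers (#1020) and the renamed keepdims kwarg (#1008).
row = ht.array([1.0, 2.0, 3.0])
grid = ht.broadcast_to(row, (4, 3))
print(ht.sum(grid, axis=0, keepdims=True).shape)   # (1, 3)
```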

New Features

  • #1126 Distributed hierarchical SVD (by @mrfh92)
  • #1020 Implement broadcast_arrays, broadcast_to (by @neosunhan)
  • #983 Signal processing: fully distributed 1D convolution (by @shahpratham); see the sketch after this list
  • #1063 add eq to Device (by @mtar)
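
The distributed 1D convolution (#983) follows the np.convolve signature; a minimal sketch (run under mpirun to see the distribution in action):

```python
import heat as ht

signal = ht.arange(32, dtype=ht.float32, split=0)   # distributed signal
kernel = ht.array([1.0, 2.0, 1.0])                  # small smoothing kernel

smoothed = ht.convolve(signal, kernel, mode="same")
print(smoothed.shape, smoothed.split)
```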

Bug Fixes

  • #1141 heat.pow() speed-up when exponent is int (by @ClaudiaComito)
  • #1136 Fixed PyTorch version check in sparse module (by @Mystic-Slice)
  • #1098 Validates number of dimensions in input to ht.sparse.sparse_csr_matrix (by @Ishaan-Chandak)
  • #1095 Convolve with distributed kernel on multiple GPUs (by @shahpratham)
  • #1094 Fix division precision error in random module (by @Mystic-Slice)
  • #1075 Fixed initialization of the DNDarray communicator in some routines (by @AsRaNi1)
  • #1066 Verify input object type and layout + Supporting tests (by @Mystic-Slice)
  • #1037 Distributed weighted average() along a tuple of axes: shape of weights must match shape of input (by @Mystic-Slice); see the sketch after this list
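
To illustrate the weighted-average fix (#1037): when averaging along a tuple of axes, the weights are expected to match the input's full shape. A hedged sketch:

```python
import heat as ht

x = ht.random.rand(4, 5, 6, split=0)
w = ht.random.rand(4, 5, 6, split=0)   # weights match the input's shape

# Weighted average along a tuple of axes (#1037).
avg = ht.average(x, axis=(0, 2), weights=w)
print(avg.shape)   # (5,)
```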

Benchmarking

  • #1137 Continuous Benchmarking of runtime (by @JuanPedroGHM)

Documentation

  • #1150 Refactoring for efficiency and readability (by @Sai-Suraj-27)
  • #1130 Reintroduce Quick Start (by @ClaudiaComito)
  • #1079 A better README file (by @Sai-Suraj-27)

Contributors

@AsRaNi1, @ClaudiaComito, @Ishaan-Chandak, @JuanPedroGHM, @Mystic-Slice, @Sai-Suraj-27, @bhagemeier, @coquelin77, @mrfh92, @mtar, @neosunhan, @shahpratham

v1.2.2

1 year ago

Changes

Communication

  • #1058 Fix edge-case contiguity mismatch for Allgatherv (by @ClaudiaComito)

Contributors

@ClaudiaComito, @JuanPedroGHM

v1.2.1

1 year ago

Changes

  • #1048 Support PyTorch 1.13.0 on branch release/1.2.x (by @github-actions)

๐Ÿ› Bug Fixes

  • #1038 Lanczos decomposition linalg.solver.lanczos: Support double precision, complex data types (by @ClaudiaComito)
  • #1034 ht.array: closed a loophole allowing DNDarray construction with incompatible shapes of local arrays (by @Mystic-Slice)

Linear Algebra

  • #1038 Lanczos decomposition linalg.solver.lanczos: Support double precision, complex data types (by @ClaudiaComito); see the sketch below
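
A hedged sketch of the extended Lanczos solver (#1038), now in double precision; the call signature (matrix plus iteration count m, returning the Krylov basis V and the tridiagonal matrix T) is assumed from the usual Lanczos convention.

```python
import heat as ht

# Lanczos requires a symmetric matrix; build one as B @ B^T.
b = ht.random.randn(50, 50, dtype=ht.float64, split=0)
a = b @ b.T

# Assumed call: m iterations -> basis V (50 x m), tridiagonal T (m x m).
v, t = ht.linalg.solver.lanczos(a, m=10)
print(v.shape, t.shape)
```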

🧪 Testing

  • #1025 Mirror repository on GitLab + CI (by @mtar)
  • #1014 Fix: set CUDA RNG state on GPU tests for test_random.py (by @JuanPedroGHM)

Contributors

@ClaudiaComito, @JuanPedroGHM, @Mystic-Slice, @coquelin77, @mtar, @github-actions, @github-actions[bot]

v1.2.0

2 years ago

Highlights

  • We have been selected as a mentoring organization for Google Summer of Code, and we already have many new contributors (see below). Thank you!
  • Heat now supports PyTorch 1.11
  • Gearing up to support data-intensive signal processing: introduced signal module and memory-distributed 1-D convolution with ht.convolve()
  • Parallel I/O: you can now parallelize writing out to a CSV file with ht.save_csv(); see the sketch after this list.
  • Introduced more flexibility in memory-distributed binary operations.
  • Expanded functionalities in linalg, manipulations modules.
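
A sketch of the new parallel CSV output; only the array and the target path are passed here, and any further formatting keyword arguments of ht.save_csv() are left at their defaults.

```python
import heat as ht

x = ht.arange(100, dtype=ht.float32, split=0).reshape((20, 5))

# Parallel write: the MPI processes cooperate to produce one CSV file.
ht.save_csv(x, "data.csv")
```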

Full Changelog: https://github.com/helmholtz-analytics/heat/compare/v1.1.0...v1.2.0

v1.1.1

2 years ago

  • #864 Dependencies: constrain torchvision version range to match supported PyTorch version range.

v1.1.0

2 years ago

Highlights

  • Slicing/indexing overhaul for a more NumPy-like user experience. Special thanks to Ben Bourgart @ben-bou and the TerrSysMP group for this one. Warning, breaking change for distributed arrays: indexing one element along the distribution axis now implies that the indexed element is communicated to all processes; see the sketch after this list.
  • More flexibility in handling non-load-balanced distributed arrays.
  • More distributed operations, including meshgrid.
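
A minimal sketch of the breaking indexing behavior: indexing a single element along the split axis now returns that element on every process.

```python
import heat as ht

x = ht.arange(8, split=0)   # distributed along axis 0

# Breaking change in v1.1.0: the indexed element is communicated to
# all processes, so every MPI rank holds (and prints) the same value.
elem = x[3]
print(elem)
```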

For other details, see the CHANGELOG.

v1.0.0

3 years ago

Release Notes

Heat v1.0 comes with some major updates:

  • new module nn for data-parallel neural networks
  • Distributed Asynchronous and Selective Optimization (DASO) to accelerate network training on multi-GPU architectures
  • support for complex numbers; see the sketch below
  • major documentation overhaul
  • a support channel on StackOverflow
  • support for PyTorch 1.8
  • dropped support for Python 3.6
  • many more updates and bug fixes, check out the CHANGELOG
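
A small sketch of the new complex-number support; the NumPy-style accessor names (ht.real, ht.imag, ht.abs) are assumed here.

```python
import heat as ht

z = ht.array([1 + 2j, 3 - 4j])
print(z.dtype)       # a complex Heat dtype
print(ht.real(z))    # real parts
print(ht.imag(z))    # imaginary parts
print(ht.abs(z))     # magnitudes
```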

v0.5.1

3 years ago

We're pinning PyTorch to version 1.6 after having run into problems with the recently released 1.7. This is a temporary solution!

Also, bug fixes:

  • #678 Bug fix: Internal functions now use explicit device parameters for DNDarray and torch.Tensor initializations.
  • #684 Bug fix: distributed reshape now works on booleans as well (see the sketch below).
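
To illustrate #684, a minimal sketch: reshaping a memory-distributed boolean array.

```python
import heat as ht

mask = ht.zeros((4, 4), dtype=ht.bool, split=0)   # distributed boolean array
print(ht.reshape(mask, (2, 8)).shape)             # boolean reshape works (#684)
```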