PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
- Fixed a minor bug that caused `torch-sparse` to crash in PyTorch 1.7.0.
- `cat` can now concatenate a list of `SparseTensor`s along `dim=0`, or diagonally by passing `dim=(0,1)` to the function call.
- Added `@overload` type annotations for `matmul`.
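For intuition, diagonal concatenation stacks sparse matrices into a block-diagonal layout by offsetting both the row and column indices of each block. A minimal pure-Python sketch over COO coordinate lists (a hypothetical helper for illustration, not the actual torch-sparse implementation):

```python
# Sketch of diagonal (block-diagonal) concatenation of sparse COO matrices.
# Each matrix is a tuple (rows, cols, values, shape); illustrative only.

def cat_diag(matrices):
    rows, cols, vals = [], [], []
    row_off = col_off = 0
    for r, c, v, (n, m) in matrices:
        rows += [i + row_off for i in r]  # shift rows below previous blocks
        cols += [j + col_off for j in c]  # shift cols right of previous blocks
        vals += v
        row_off += n
        col_off += m
    return rows, cols, vals, (row_off, col_off)

a = ([0, 1], [1, 0], [1.0, 2.0], (2, 2))  # a 2x2 sparse matrix
b = ([0], [0], [3.0], (1, 1))             # a 1x1 sparse matrix
result = cat_diag([a, b])
# The 1x1 block lands at offset (2, 2), giving a 3x3 block-diagonal matrix.
```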
- Replaced `options` arguments with `device` and `dtype` arguments.
- Added neighborhood sampling functionality via `sample` and `sample_adj`.
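Neighborhood sampling draws a bounded number of neighbors per seed node, which keeps mini-batch computation tractable on large graphs. A rough pure-Python sketch of the idea over an adjacency dict (the name and signature here are illustrative, not the torch-sparse `sample_adj` API):

```python
import random

# Illustrative sketch of neighborhood sampling; not the actual
# torch-sparse `sample_adj` implementation.
def sample_neighbors(adj, nodes, num_neighbors, seed=0):
    rng = random.Random(seed)
    out = {}
    for v in nodes:
        nbrs = adj.get(v, [])
        if len(nbrs) <= num_neighbors:
            out[v] = list(nbrs)                       # keep all neighbors
        else:
            out[v] = rng.sample(nbrs, num_neighbors)  # sample w/o replacement
    return out

adj = {0: [1, 2, 3], 1: [0], 2: [0, 3]}
sampled = sample_neighbors(adj, [0, 1], num_neighbors=2)
# Node 0 keeps 2 of its 3 neighbors; node 1 keeps its single neighbor.
```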
- Added `spspmm` support on the CPU.
- Added `sparse_reshape` functionality.
- Added `bandwidth` utilities.

This release introduces random walk and GraphSAINT subgraph functionalities via `random_walk` and `saint_subgraph` on the `SparseTensor` class.
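A random walk repeatedly hops from the current node to a uniformly sampled neighbor. The following is a minimal pure-Python sketch of a fixed-length walk over an adjacency dict, purely for intuition (the real torch-sparse kernel operates on `SparseTensor` batches; the dead-end convention below is an assumption):

```python
import random

# Sketch of a fixed-length random walk; illustrative only, not the
# torch-sparse `random_walk` kernel.
def random_walk(adj, start, walk_length, seed=0):
    rng = random.Random(seed)
    walk = [start]
    for _ in range(walk_length):
        nbrs = adj.get(walk[-1], [])
        if not nbrs:                      # dead end: stay in place
            walk.append(walk[-1])         # (one common convention)
        else:
            walk.append(rng.choice(nbrs))
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0]}
walk = random_walk(adj, start=0, walk_length=4)
# Produces a walk of 5 nodes where consecutive nodes are connected.
```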
This release introduces the `partition` function based on the METIS library, which re-orders the entries of a `SparseTensor` according to a computed partition. Note that the METIS library needs to be installed and `WITH_METIS=1` needs to be set in order to make use of this function.
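Re-ordering by a partition means relabeling nodes with a permutation that groups members of the same cluster together, so that each cluster's entries become a contiguous block. A small pure-Python sketch over COO indices, assuming the cluster assignment has already been computed (e.g. by METIS); this is illustrative, not the torch-sparse `partition` implementation:

```python
# Sketch: relabel the row/col indices of a sparse COO matrix so that nodes
# in the same cluster get consecutive ids. Illustrative only.
def reorder_by_partition(rows, cols, cluster):
    # Stable sort of node ids by cluster id yields the new node ordering.
    order = sorted(range(len(cluster)), key=lambda v: cluster[v])
    new_id = {old: new for new, old in enumerate(order)}
    return [new_id[r] for r in rows], [new_id[c] for c in cols]

cluster = [1, 0, 1, 0]  # cluster assignment per node (4 nodes, 2 clusters)
new_rows, new_cols = reorder_by_partition([0, 1, 2], [3, 2, 0], cluster)
# Nodes 1 and 3 (cluster 0) become ids 0 and 1; nodes 0 and 2 become 2 and 3.
```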
This release includes a major rewrite of `torch-sparse` that introduces the `SparseTensor` class, which is fully differentiable and traceable. The `SparseTensor` class is still undocumented and not well-tested, and should therefore be used with caution. All known functions from earlier versions still work as expected.
Added support for `torch-scatter==2.0`. As a result, PyTorch 1.4 is now required to install this package.