PyTorch Geometric Versions

Graph Neural Network Library for PyTorch

2.5.3

3 weeks ago

PyG 2.5.3 includes a variety of bug fixes related to the MessagePassing refactoring.

Bug Fixes

  • Ensure backward compatibility in MessagePassing via torch.load (#9105)
  • Prevent model compilation on custom propagate functions (#9079)
  • Flush template file before closing it (#9151)
  • Do not set propagate method twice in MessagePassing for decomposed_layers > 1 (#9198)

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.5.2...2.5.3

2.5.2

1 month ago

PyG 2.5.2 includes a bug fix for implementing MessagePassing layers in Google Colab.

Bug Fixes

  • Raise error in case inspect.get_source is not supported (#9068)

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.5.1...2.5.2

2.5.1

2 months ago

PyG 2.5.1 includes a variety of bugfixes.

Bug Fixes

  • Ignore self.propagate appearances in comments when parsing MessagePassing implementation (#9044)
  • Fixed OSError on read-only file systems within MessagePassing (#9032)
  • Made MessagePassing interface thread-safe (#9001)
  • Fixed metaclass conflict in Dataset (#8999)
  • Fixed import errors on MessagePassing modules with nested inheritance (#8973)
  • Fixed OSError when downloading datasets with simplecache (#8932)

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.5.0...2.5.1

2.5.0

3 months ago

We are excited to announce the release of PyG 2.5 🎉🎉🎉

PyG 2.5 is the culmination of work from 38 contributors who have worked on features and bug-fixes for a total of over 360 commits since torch-geometric==2.4.0.

Highlights

torch_geometric.distributed

We are thrilled to announce the first in-house distributed training solution for PyG via the torch_geometric.distributed sub-package. Developers and researchers can now take full advantage of distributed training on large-scale datasets that cannot be fully loaded into the memory of a single machine. This implementation does not require any additional packages to be installed on top of the default PyG stack.

Key Advantages

  • Balanced graph partitioning via METIS ensures minimal communication overhead when sampling subgraphs across compute nodes.
  • Utilizing DDP for model training in conjunction with RPC for remote sampling and feature fetching routines (with TCP/IP protocol and gloo communication backend) allows for data parallelism with distinct data partitions at each node.
  • The implementation via custom GraphStore and FeatureStore APIs provides a flexible and tailored interface for distributing large graph structure information and feature storage.
  • Distributed neighbor sampling is capable of sampling in both local and remote partitions through RPC communication channels. All advanced functionality of single-node sampling is also applicable to distributed training, e.g., heterogeneous sampling, link-level sampling, temporal sampling, etc.
  • Distributed data loaders offer a high-level abstraction for managing sampler processes, ensuring simplicity and seamless integration with standard PyG data loaders.

See here for the accompanying tutorial. In addition, we provide two distributed examples in examples/distributed/pyg to get started.
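
A minimal sketch of the partitioning step is shown below (this assumes the Partitioner class from torch_geometric.distributed and a Planetoid dataset; exact argument names may differ between versions). Each machine later loads one of the generated partitions for training:

from torch_geometric.datasets import Planetoid
from torch_geometric.distributed import Partitioner

data = Planetoid(root='./data/Planetoid', name='Cora')[0]

# Split the graph into two METIS-based partitions and write them to disk.
partitioner = Partitioner(data, num_parts=2, root='./partitions/Cora')
partitioner.generate_partition()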

EdgeIndex Tensor Representation

torch-geometric==2.5.0 introduces the EdgeIndex class.

EdgeIndex is a torch.Tensor that holds an edge_index representation of shape [2, num_edges]. Edges are given as pairwise source and destination node indices in sparse COO format. While EdgeIndex sub-classes a general torch.Tensor, it can hold additional (meta)data, i.e.:

  • sparse_size: The underlying sparse matrix size
  • sort_order: The sort order (if present), either by row or column
  • is_undirected: Whether edges are bidirectional.

Additionally, EdgeIndex caches data for fast CSR or CSC conversion in case its representation is sorted (i.e. its rowptr or colptr). Caches are filled based on demand (e.g., when calling EdgeIndex.sort_by()), or when explicitly requested via EdgeIndex.fill_cache_(), and are maintained and adjusted over its lifespan (e.g., when calling EdgeIndex.flip()).

import torch
from torch_geometric import EdgeIndex

edge_index = EdgeIndex(
    [[0, 1, 1, 2],
     [1, 0, 2, 1]],
    sparse_size=(3, 3),
    sort_order='row',
    is_undirected=True,
    device='cpu',
)
>>> EdgeIndex([[0, 1, 1, 2],
...            [1, 0, 2, 1]])
assert edge_index.is_sorted_by_row
assert edge_index.is_undirected

# Flipping order:
edge_index = edge_index.flip(0)
>>> EdgeIndex([[1, 0, 2, 1],
...            [0, 1, 1, 2]])
assert edge_index.is_sorted_by_col
assert edge_index.is_undirected

# Filtering:
mask = torch.tensor([True, True, True, False])
edge_index = edge_index[:, mask]
>>> EdgeIndex([[1, 0, 2],
...            [0, 1, 1]])
assert edge_index.is_sorted_by_col
assert not edge_index.is_undirected

# Sparse-Dense Matrix Multiplication:
out = edge_index.flip(0) @ torch.randn(3, 16)
assert out.size() == (3, 16)

EdgeIndex is implemented by extending torch.Tensor via the __torch_function__ interface (see here for the highly recommended tutorial).

EdgeIndex ensures optimal computation in GNN message passing schemes, while preserving the ease-of-use of regular COO-based PyG workflows. EdgeIndex will fully deprecate the usage of SparseTensor from torch-sparse in later releases, leaving us with just a single source of truth for representing graph structure information in PyG.

RecSys Support

Previously, most of our link prediction models were trained and evaluated using binary classification metrics. However, this usually requires that we have a set of candidates in advance, from which we can then infer the existence of links. This is not necessarily practical, since in most cases, we want to find the top-k most likely links from the full set of O(N^2) pairs.

torch-geometric==2.5.0 brings full support for using GNNs as a recommender system (#8452), including maximum inner product search via MIPSKNNIndex and retrieval metrics for evaluation:

from torch_geometric.nn import MIPSKNNIndex

# dst_emb, src_loader, model, k, retrieval_metrics and edge_label_index are
# assumed to be defined by the surrounding training pipeline.
mips = MIPSKNNIndex(dst_emb)

for src_batch in src_loader:
    src_emb = model(src_batch.x_dict, src_batch.edge_index_dict)
    _, pred_index_mat = mips.search(src_emb, k)

    for metric in retrieval_metrics:
        metric.update(pred_index_mat, edge_label_index)

for metric in retrieval_metrics:
    metric.compute()

See here for the accompanying example.

PyTorch 2.2 Support

PyG 2.5 is fully compatible with PyTorch 2.2 (#8857), and supports the following combinations:

PyTorch 2.2    cpu    cu118    cu121
Linux          ✅     ✅       ✅
macOS          ✅
Windows        ✅     ✅       ✅

You can still install PyG 2.5 with older PyTorch releases, back to PyTorch 1.12, in case you are not eager to update your PyTorch version.

Native torch.compile(...) and TorchScript Support

torch-geometric==2.5.0 introduces a full re-implementation of the MessagePassing interface, which makes it natively applicable to both torch.compile and TorchScript. As such, torch_geometric.compile is now fully deprecated in favor of torch.compile:

- model = torch_geometric.compile(model)
+ model = torch.compile(model)

and MessagePassing.jittable() is now a no-op:

- conv = torch.jit.script(conv.jittable())
+ conv = torch.jit.script(conv)

In addition, torch.compile usage has been fixed to not require disabling of extension packages such as torch-scatter or torch-sparse.
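
As a brief illustration (using a standard GCNConv layer; any MessagePassing module behaves the same way), both paths now work on the module directly:

import torch
from torch_geometric.nn import GCNConv

conv = GCNConv(16, 32)
compiled_conv = torch.compile(conv)     # native torch.compile support
scripted_conv = torch.jit.script(conv)  # jittable() is no longer required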

New Tutorials, Examples, Models and Improvements

Breaking Changes

  • GATConv now initializes modules differently depending on whether their input is bipartite or non-bipartite (#8397). This will lead to issues when loading model state for GATConv layers trained on earlier PyG versions.

Deprecations

Features

Package-wide Improvements

  • Added support for type checking via mypy (#8254)
  • Added fsspec as file system backend (#8379, #8426, #8434, #8474)
  • Added fallback code path for segment-based reductions in case torch-scatter is not installed (#8852)

Temporal Graph Support

torch_geometric.datasets

torch_geometric.nn

torch_geometric.metrics

torch_geometric.explain

torch_geometric.transforms

Other Improvements

  • Added support for returning multigraphs in utils.to_networkx (#8575)
  • Added noise scheduler utilities utils.noise_scheduler.{get_smld_sigma_schedule,get_diffusion_beta_schedule} for diffusion-based graph generative models (#8347)
  • Added node relabeling functionality to utils.dropout_node via the relabel_nodes: bool argument (#8524); see the short sketch after this list
  • Added support for weighted utils.cross_entropy.sparse_cross_entropy (#8340)
  • Added support for profiling on XPU device via profile.profileit("xpu") (#8532)
  • Added METIS partitioning with CSC/CSR format selection in ClusterData (#8438)
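
Below is a short sketch of the new relabel_nodes option of utils.dropout_node (the edge_index values are illustrative). With relabel_nodes=True, the surviving nodes are re-indexed to a contiguous range starting at zero:

import torch
from torch_geometric.utils import dropout_node

edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])

# Randomly drop nodes (and their incident edges), then relabel the rest:
edge_index, edge_mask, node_mask = dropout_node(edge_index, p=0.5,
                                                relabel_nodes=True)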

Bugfixes

Changes

New Contributors

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.4.0...2.5.0

2.4.0

7 months ago

We are excited to announce the release of PyG 2.4 🎉🎉🎉

PyG 2.4 is the culmination of work from 62 contributors who have worked on features and bug-fixes for a total of over 500 commits since torch-geometric==2.3.1.

Highlights

PyTorch 2.1 and torch.compile(dynamic=True) support

The long wait is over! With the release of PyTorch 2.1, PyG 2.4 now brings full support for torch.compile to graphs of varying size via the dynamic=True option, which is especially useful for use cases involving DataLoader or NeighborLoader. Examples and tutorials have been updated to reflect this support accordingly (#8134), and models and layers in torch_geometric.nn have been tested to produce zero graph breaks:

import torch_geometric

model = torch_geometric.compile(model, dynamic=True)

When enabling the dynamic=True option, PyTorch will up-front attempt to generate a kernel that is as dynamic as possible to avoid recompilations when sizes change across mini-batches. As such, you should only omit dynamic=True when graph sizes are guaranteed to never change. Note that dynamic=True requires PyTorch >= 2.1.0 to be installed.

PyG 2.4 is fully compatible with PyTorch 2.1, and supports the following combinations:

PyTorch 2.1    cpu    cu118    cu121
Linux          ✅     ✅       ✅
macOS          ✅
Windows        ✅     ✅       ✅

You can still install PyG 2.4 with older PyTorch releases, back to PyTorch 1.11, in case you are not eager to update your PyTorch version.

OnDiskDataset Interface

We added the OnDiskDataset base class for creating large graph datasets (e.g., molecular databases with billions of graphs), which do not easily fit into CPU memory at once (#8028, #8044, #8046, #8051, #8052, #8054, #8057, #8058, #8066, #8088, #8092, #8106). OnDiskDataset leverages our newly introduced Database backend (sqlite3 by default) for on-disk storage and access of graphs, supports DataLoader out-of-the-box, and is optimized for maximum performance.

OnDiskDataset utilizes a user-specified schema to store data as efficiently as possible (instead of Python pickling). The schema can take int, float, str, object, or a dictionary with dtype and size keys (for specifying tensor data) as input, and can be nested as a dictionary. For example,

import torch
from torch_geometric.data import OnDiskDataset

dataset = OnDiskDataset(root, schema={
    'x': dict(dtype=torch.float, size=(-1, 16)),
    'edge_index': dict(dtype=torch.long, size=(2, -1)),
    'y': float,
})

creates a database with three columns, where x and edge_index are stored as binary data, and y is stored as a float.

Afterwards, you can append data to the OnDiskDataset and retrieve data from it via dataset.append()/dataset.extend(), and dataset.get()/dataset.multi_get(), respectively. We added a fully working example on how to set up your own OnDiskDataset here (#8102). You can also convert in-memory dataset instances to an OnDiskDataset instance by running InMemoryDataset.to_on_disk_dataset() (#8116).
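
As a small sketch of the conversion path (the dataset choice is illustrative), an existing in-memory dataset can be turned into an SQLite-backed OnDiskDataset in a single call:

from torch_geometric.datasets import TUDataset

# Any InMemoryDataset works here; MUTAG is just a small example.
dataset = TUDataset(root='./data/TUDataset', name='MUTAG')
on_disk_dataset = dataset.to_on_disk_dataset()  # writes graphs to a database

data = on_disk_dataset.get(0)  # retrieve a single graph back from disk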

Neighbor Sampling Improvements

Hierarchical Sampling

One drawback of NeighborLoader is that it computes representations for all sampled nodes at all depths of the network. However, nodes sampled in later hops no longer contribute to the node representations of seed nodes in later GNN layers, so computing their embeddings is wasted effort and makes NeighborLoader marginally slower. This is a trade-off we made to obtain a clean, modular and experiment-friendly GNN design, which does not tie the definition of the model to its utilized data loader routine.

With PyG 2.4, we introduced the option to eliminate this overhead and further speed up training and inference in mini-batch GNNs, which we call "Hierarchical Neighborhood Sampling" (see here for the full tutorial) (#6661, #7089, #7244, #7425, #7594, #7942). Its main idea is to progressively trim the adjacency matrix of the returned subgraph before inputting it to each GNN layer, and it works seamlessly across several models, both in the homogeneous and heterogeneous graph setting. To support this trimming and implement it effectively, the NeighborLoader implementations in PyG and pyg-lib additionally return the number of nodes and edges sampled in each hop, which are then used on a per-layer basis to trim the adjacency matrix and the various feature matrices to only maintain the required amount (see the trim_to_layer method):

from typing import List

import torch
from torch import Tensor
from torch.nn import ModuleList
from torch_geometric.nn import Linear, SAGEConv
from torch_geometric.utils import trim_to_layer


class GNN(torch.nn.Module):
    def __init__(self, in_channels: int, hidden_channels: int,
                 out_channels: int, num_layers: int):
        super().__init__()

        self.convs = ModuleList([SAGEConv(in_channels, hidden_channels)])
        for _ in range(num_layers - 1):
            self.convs.append(SAGEConv(hidden_channels, hidden_channels))
        self.lin = Linear(hidden_channels, out_channels)

    def forward(
        self,
        x: Tensor,
        edge_index: Tensor,
        num_sampled_nodes_per_hop: List[int],
        num_sampled_edges_per_hop: List[int],
    ) -> Tensor:

        for i, conv in enumerate(self.convs):
            # Trim edge and node information to the current layer `i`.
            x, edge_index, _ = trim_to_layer(
                i, num_sampled_nodes_per_hop, num_sampled_edges_per_hop,
                x, edge_index)

            x = conv(x, edge_index).relu()

        return self.lin(x)

Corresponding examples can be found here and here.
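
The following usage sketch (assuming a homogeneous Data object data; feature and class sizes are illustrative) shows how the per-hop counts reported by NeighborLoader are forwarded to the model above so that trim_to_layer can prune each layer:

from torch_geometric.loader import NeighborLoader

loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=128)
model = GNN(in_channels=16, hidden_channels=64, out_channels=7, num_layers=2)

for batch in loader:
    out = model(batch.x, batch.edge_index,
                batch.num_sampled_nodes, batch.num_sampled_edges)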

Biased Sampling

Additionally, we added support for weighted/biased sampling in NeighborLoader/LinkNeighborLoader scenarios. For this, simply point the loader to your edge_weight attribute via the weight_attr argument during NeighborLoader initialization, and PyG will pick up these weights to perform weighted/biased sampling (#8038):

from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader

data = Data(num_nodes=5, edge_index=edge_index, edge_weight=edge_weight)

loader = NeighborLoader(
    data,
    num_neighbors=[10, 10],
    weight_attr='edge_weight',
)

batch = next(iter(loader))

New models, datasets, examples & tutorials

As part of our algorithm and documentation sprints (#7892), we have added a variety of new models, datasets, examples and tutorials.

Join our Slack here if you're interested in joining community sprints in the future!

Breaking Changes

  • Data.keys() is now a method instead of a property (#7629):
    PyG 2.3:

    data = Data(x=x, edge_index=edge_index)
    print(data.keys)
    # ['x', 'edge_index']

    PyG 2.4:

    data = Data(x=x, edge_index=edge_index)
    print(data.keys())
    # ['x', 'edge_index']
    
  • Dropped Python 3.7 support (#7939)
  • Removed FastHGTConv in favor of HGTConv (#7117)
  • Removed the layer_type argument from GraphMaskExplainer (#7445)
  • Renamed dest argument to dst in utils.geodesic_distance (#7708)

Deprecations

Features

Data and HeteroData improvements

Data-loading improvements

  • Added support for floating-point slicing in Dataset, e.g., dataset[:0.9] (#7915); see the short sketch after this list
  • Added save and load methods to InMemoryDataset (#7250, #7413)
  • Beta: Added IBMBNodeLoader and IBMBBatchLoader data loaders (#6230)
  • Beta: Added HyperGraphData to support hypergraphs (#7611)
  • Added CachedLoader (#7896, #7897)
  • Allowed GPU tensors as input to NodeLoader and LinkLoader (#7572)
  • Added PrefetchLoader capabilities (#7376, #7378, #7383)
  • Added manual sampling interface to NodeLoader and LinkLoader (#7197)
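
For example, the new floating-point slicing makes fractional train/test splits a one-liner (the dataset choice is illustrative); the float is interpreted as a fraction of the dataset length:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/TUDataset', name='MUTAG').shuffle()
train_dataset = dataset[:0.9]  # first 90% of the graphs
test_dataset = dataset[0.9:]   # remaining 10% of the graphs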

Better support for sparse tensors

  • Added SparseTensor support to WLConvContinuous, GeneralConv, PDNConv and ARMAConv (#8013)
  • Changed torch_sparse.SparseTensor logic to utilize torch.sparse_csr instead (#7041)
  • Added support for torch.sparse.Tensor in DataLoader (#7252)
  • Added support for torch.jit.script within MessagePassing layers without torch_sparse being installed (#7061, #7062)
  • Added unbatching logic for torch.sparse.Tensor (#7037)
  • Added support for Data.num_edges for native torch.sparse.Tensor adjacency matrices (#7104)
  • Accelerated sparse tensor conversion routines (#7042, #7043)
  • Added a sparse cross_entropy implementation (#7447, #7466)

Integration with 3rd-party libraries

  • Added FlopsCount support via fvcore (#7693)
  • Added to_dgl and from_dgl conversion functions (#7053)
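
A small round-trip sketch of the to_dgl/from_dgl helpers mentioned above (requires dgl to be installed; tensor shapes are illustrative):

import torch
from torch_geometric.data import Data
from torch_geometric.utils import from_dgl, to_dgl

data = Data(x=torch.randn(4, 8), edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]))

g = to_dgl(data)     # convert to a DGL graph, carrying over node features
data = from_dgl(g)   # convert back to a PyG data object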

torch_geometric.transforms

  • All transforms are now immutable, i.e., they perform a shallow copy of the data and therefore no longer modify data in-place (#7429)
  • Added the HalfHop graph upsampling augmentation (#7827)
  • Added interval argument to Cartesian, LocalCartesian and Distance transformations (#7533, #7614, #7700)
  • Added an optional add_pad_mask argument to the Pad transform (#7339)
  • Added NodePropertySplit transformation for creating node-level splits using structural node properties (#6894)
  • Added an AddRemainingSelfLoops transformation (#7192)

Bugfixes

  • Fixed HeteroConv for layers that have a non-default argument order, e.g., GCN2Conv (#8166)
  • Handle reserved keywords as keys in ModuleDict and ParameterDict (#8163)
  • Fixed DynamicBatchSampler.__len__ to raise an error in case num_steps is undefined (#8137)
  • Enabled pickling of DimeNet models (#8019)
  • Fixed a bug in which batch.e_id was not correctly computed on unsorted graph inputs (#7953)
  • Fixed from_networkx conversion from nx.stochastic_block_model graphs (#7941)
  • Fixed the usage of bias_initializer in HeteroLinear (#7923)
  • Fixed broken URLs in HGBDataset (#7907)
  • Fixed an issue where SetTransformerAggregation produced NaN values for isolated nodes (#7902)
  • Fixed summary on modules with uninitialized parameters (#7884)
  • Fixed tracing of add_self_loops for a dynamic number of nodes (#7330)
  • Fixed device issue in PNAConv.get_degree_histogram (#7830)
  • Fixed the shape of edge_label_time when using temporal sampling on homogeneous graphs (#7807)
  • Fixed edge_label_index computation in LinkNeighborLoader for the homogeneous+disjoint mode (#7791)
  • Fixed CaptumExplainer for binary classification tasks (#7787)
  • Raise error when collecting non-existing attributes in HeteroData (#7714)
  • Fixed get_mesh_laplacian for normalization="sym" (#7544)
  • Use dim_size to initialize output size of the EquilibriumAggregation layer (#7530)
  • Fixed empty edge indices handling in SparseTensor (#7519)
  • Move the scaler tensor in GeneralConv to the correct device (#7484)
  • Fixed HeteroLinear bug when used via mixed precision (#7473)
  • Fixed gradient computation of edge weights in utils.spmm (#7428)
  • Fixed an index-out-of-range bug in QuantileAggregation when dim_size is passed (#7407)
  • Fixed a bug in LightGCN.recommendation_loss() to only use the embeddings of the nodes involved in the current mini-batch (#7384)
  • Fixed a bug in which inputs were modified in-place in to_hetero_with_bases (#7363)
  • Do not load node_default and edge_default attributes in from_networkx (#7348)
  • Fixed HGTConv utility function _construct_src_node_feat (#7194)
  • Fixed subgraph on unordered inputs (#7187)
  • Allow missing node types in HeteroDictLinear (#7185)
  • Fixed numpy incompatibility when reading files for Planetoid datasets (#7141)
  • Fixed crash of heterogeneous data loaders if node or edge types are missing (#7060, #7087)
  • Allowed CaptumExplainer to be called multiple times in a row (#7391)

Changes

Full Changelog

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.3.0...2.4.0

New Contributors

2.3.1

1 year ago

PyG 2.3.1 includes a variety of bugfixes.

Bug Fixes

  • Fixed cugraph GNN layer support for pylibcugraphops==23.04 (#7023)
  • Removed DeprecationWarning of TypedStorage usage in DataLoader (#7034)
  • Fixed a bug in FastHGTConv that computed values via parameters used to compute the keys (#7050)
  • Fixed numpy incompatibility when reading files in Planetoid datasets (#7141)
  • Fixed utils.subgraph on unordered inputs (#7187)
  • Fixed support for Data.num_edges for native torch.sparse.Tensor adjacency matrices (#7104)

Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.3.0...2.3.1

2.3.0

1 year ago

2.2.0

1 year ago

2.1.0

1 year ago

2.0.4

2 years ago

PyG 2.0.4 🎉

A new minor PyG version release, bringing PyTorch 1.11 support to PyG. It further includes a variety of new features and bugfixes:

Features

  • Added Quiver examples for multi-GPU training using GraphSAGE (#4103), thanks to @eedalong and @luomai
  • nn.model.to_captum: Full integration of explainability methods provided by the Captum library (#3990, #4076), thanks to @RBendias
  • nn.conv.RGATConv: The relational graph attentional operator (#4031, #4110), thanks to @fork123aniket
  • nn.pool.DMoNPooling: The spectral modularity pooling operator (#4166, #4242), thanks to @fork123aniket
  • nn.*: Support for shape information in the documentation (#3739, #3889, #3893, #3946, #3981, #4009, #4120, #4158), thanks to @saiden89 and @arunppsg and @konstantinosKokos
  • loader.TemporalDataLoader: A dataloader to load a TemporalData object in mini-batches (#3985, #3988), thanks to @otaviocx
  • loader.ImbalancedSampler: A weighted random sampler that randomly samples elements according to class distribution (#4198)
  • transforms.VirtualNode: A transform that adds a virtual node to a graph (#4163)
  • transforms.LargestConnectedComponents: Selects the subgraph that corresponds to the largest connected components in the graph (#3949), thanks to @abojchevski
  • utils.homophily: Support for class-insensitive edge homophily (#3977, #4152), thanks to @hash-ir and @jinjh0123
  • utils.get_mesh_laplacian: Mesh Laplacian computation (#4187), thanks to @daniel-unyi-42

Datasets

  • Added a dataset cheatsheet to the documentation that collects important graph statistics across a variety of datasets supported in PyG (#3807, #3817) (please consider helping us fill in its remaining content)
  • datasets.EllipticBitcoinDataset: A dataset of Bitcoin transactions (#3815), thanks to @shravankumar147

Minor Changes

  • nn.models.MLP: MLPs can now either be initialized via a list of channels or by specifying hidden_channels and num_layers (#3957); see the short sketch after this list
  • nn.models.BasicGNN: Final Linear transformations are now always applied (except for jk=None) (#4042)
  • nn.conv.MessagePassing: Message passing modules that make use of edge_updater are now jittable (#3765), thanks to @Padarn
  • nn.conv.MessagePassing: (Official) support for min and mul aggregations (#4219)
  • nn.LightGCN: Initialize embeddings via xavier_uniform for better model performance (#4083), thanks to @nishithshowri006
  • nn.conv.ChebConv: Automatic eigenvalue approximation (#4106), thanks to @daniel-unyi-42
  • nn.conv.APPNP: Added support for optional edge_weight (690a01d), thanks to @YueeXiang
  • nn.conv.GravNetConv: Support for torch.jit.script (#3885), thanks to @RobMcH
  • nn.pool.global_*_pool: The batch vector is now optional (#4161)
  • nn.to_hetero: Added a warning in case to_hetero is used on HeteroData metadata with unused destination node types (#3775)
  • nn.to_hetero: Support for nested modules (ea135bf)
  • nn.Sequential: Support for indexing (#3790)
  • nn.Sequential: Support for OrderedDict as input (#4075)
  • datasets.ZINC: Added an in-depth description of the task (#3832), thanks to @gasteigerjo
  • datasets.FakeDataset: Support for different feature distributions across different labels (#4065), thanks to @arunppsg
  • datasets.FakeDataset: Support for custom global attributes (#4074), thanks to @arunppsg
  • transforms.NormalizeFeatures: Features will no longer be transformed in-place (ada5b9a)
  • transforms.NormalizeFeatures: Support for negative feature values (6008e30)
  • utils.is_undirected: Improved efficiency (#3789)
  • utils.dropout_adj: Improved efficiency (#4059)
  • utils.contains_isolated_nodes: Improved efficiency (970de13)
  • utils.to_networkx: Support for to_undirected options (upper triangle vs. lower triangle) (#3901, #3948), thanks to @RemyLau
  • graphgym: Support for custom metrics and loggers (#3494), thanks to @RemyLau
  • graphgym.register: Register operations can now be used as class decorators (#3779, #3782)
  • Documentation: Added a few exercises at the end of documentation tutorials (#3780), thanks to @PabloAMC
  • Documentation: Added better installation instructions to CONTRIBUTING.md (#3803, #3991, #3995), thanks to @Cho-Geonwoo and @RBendias and @RodrigoVillatoro
  • Refactor: Clean-up dependencies (#3908, #4133, #4172), thanks to @adelizer
  • CI: Improved test runtimes (#4241)
  • CI: Additional linting check via yamllint (#3886)
  • CI: Additional linting check via isort (66b1780), thanks to @mananshah99
  • torch.package: Model packaging via torch.package (#3997)
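
For illustration, the two construction modes of MLP mentioned above produce the same network (channel sizes are arbitrary):

from torch_geometric.nn import MLP

mlp = MLP([16, 32, 64])  # via an explicit list of channels
mlp = MLP(in_channels=16, hidden_channels=32, out_channels=64, num_layers=2)  # equivalent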

Bugfixes

  • data.HeteroData: Fixed a bug in data.{attr_name}_dict in case data.{attr_name} does not exist (#3897)
  • data.Data: Fixed data.is_edge_attr in case data.num_edges == 1 (#3880)
  • data.Batch: Fixed a device mismatch bug in case a batch object was indexed that was created from GPU tensors (e6aa4c9, c549b3b)
  • data.InMemoryDataset: Fixed a bug in which copy did not respect the underlying slice (d478dcb, #4223)
  • nn.conv.MessagePassing: Fixed message passing with zero nodes/edges (#4222)
  • nn.conv.MessagePassing: Fixed bipartite message passing with flow="target_to_source" (#3907)
  • nn.conv.GeneralConv: Fixed an issue in case skip_linear=False and in_channels=out_channels (#3751), thanks to @danielegrattarola
  • nn.to_hetero: Fixed model transformation in case node type names or edge type names contain whitespaces or dashes (#3882, b63a660)
  • nn.dense.Linear: Fixed a bug in lazy initialization for PyTorch < 1.8.0 (973d17d, #4086)
  • nn.norm.LayerNorm: Fixed a bug in the shape of weights and biases (#4030), thanks to @marshka
  • nn.pool: Fixed torch.jit.script support for torch-cluster functions (#4047)
  • datasets.TOSCA: Fixed a bug in which indices of faces started at 1 rather than 0 (8c282a0), thanks to @JRowbottomGit
  • datasets.WikiCS: Fixed WikiCS to be undirected by default (#3796), thanks to @pmernyei
  • Resolved inconsistency between utils.contains_isolated_nodes and data.has_isolated_nodes (#4138)
  • graphgym: Fixed the loss function regarding multi-label classification (#4206), thanks to @RemyLau
  • Documentation: Fixed typos, grammar and bugs (#3840, #3874, #3875, #4149), thanks to @itamblyn and @chrisyeh96 and @finquick