Python package built to ease deep learning on graphs, on top of existing DL frameworks.
We're thrilled to announce the release of DGL 2.1.0. 🎉🎉🎉
GraphBolt
- GraphBolt is now available. Thanks @mfbalin for the extraordinary effort. See the updated examples.
- GraphBolt to PyG: convert to the PyG data format and train with PyG models seamlessly: examples.
- NegativeSampler for seeds, by @yxy235 in https://github.com/dmlc/dgl/pull/7068
Thanks for all your contributions. @drivanov @frozenbugs @LourensT @Skeleton003 @mfbalin @RamonZhou @Rhett-Ying @wkmyws @jasonlin316 @caojy1998 @czkkkkkk @hutiechuan @peizhou001 @rudongyu @xiangyuzhi @yxy235
We're thrilled to announce the release of DGL 2.0.0, a major milestone in our mission to empower developers with cutting-edge tools for Graph Neural Networks (GNNs). 🎉🎉🎉
In this release, we introduce a brand new package, dgl.graphbolt, a revolutionary data loading framework that supercharges your GNN training/inference by streamlining the data pipeline. Please refer to the documentation page for GraphBolt's overview and end-to-end notebooks. More end-to-end examples are available in the GitHub code base.
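The pipeline idea behind GraphBolt can be sketched in plain Python: minibatch loading is expressed as composable stages (sample seeds, sample neighbors, fetch features), each consuming the previous stage's output. The stage names below are illustrative stand-ins under that assumption, not the actual dgl.graphbolt API:

```python
def item_sampler(seeds, batch_size):
    """Stage 1: group seed nodes into minibatches."""
    for i in range(0, len(seeds), batch_size):
        yield {"seeds": seeds[i:i + batch_size]}

def sample_neighbors(batches, adj):
    """Stage 2: attach sampled neighbors for each seed."""
    for batch in batches:
        batch["neighbors"] = {s: adj.get(s, []) for s in batch["seeds"]}
        yield batch

def fetch_features(batches, feats):
    """Stage 3: fetch features for every node touched by the batch."""
    for batch in batches:
        nodes = set(batch["seeds"])
        for nbrs in batch["neighbors"].values():
            nodes.update(nbrs)
        batch["feat"] = {n: feats[n] for n in nodes}
        yield batch

# Toy graph: adjacency lists and one scalar feature per node.
adj = {0: [1, 2], 1: [2], 2: [0]}
feats = {0: [0.1], 1: [0.2], 2: [0.3]}

pipe = fetch_features(sample_neighbors(item_sampler([0, 1], batch_size=2), adj), feats)
batch = next(pipe)
print(batch["seeds"], sorted(batch["feat"]))  # [0, 1] [0, 1, 2]
```

Because each stage is a generator, a batch flows through the whole chain before the next one starts, which is the streaming behavior the real framework exploits.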
- Bump googletest to v1.14.0 (#6273)
- Add the --num_workers input parameter to the EEG_GCNN example. (#6467)
- Avoid calling IsPinned in the coo/csr constructor from every sampling process (#6568)

Windows packages are not available and will be ready soon.
DGL 2.0.0 has been achieved through the dedicated efforts of the DGL team and the invaluable contributions of our external collaborators.
@9rum @AndreaPrati98 @BarclayII @HernandoR @OlegPlatonov @RamonZhou @Rhett-Ying @SinuoXu @Skeleton003 @TristonC @anko-intel @ayushnoori @caojy1998 @chang-l @czkkkkkk @daniil-sizov @drivanov @frozenbugs @hmacdope @isratnisa @jermainewang @keli-wen @mfbalin @ndbaker1 @paoxiaode @peizhou001 @rudongyu @songqing @willarliss @xiangyuzhi @yaox12 @yxy235 @zheng-da
Your collective efforts have been key to the success of this release. We deeply appreciate every contribution, large and small, as they collectively shape and improve DGL. Thank you all for your dedication and hard work!
- PyTorch: support added for 2.1.0, 2.1.1 (except Windows); the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1.
- CUDA: support added for 12.1; the supported versions are 11.6, 11.7, 11.8, 12.1.
- Windows packages for PyTorch 2.1.0, 2.1.1 are blocked due to a compiling issue. This will be supported as soon as the issue is resolved.
- PyTorch 1.12.0, 1.12.1 are deprecated; the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1.
- CUDA 10.2, 11.3 are deprecated; the supported versions are 11.6, 11.7, 11.8.
- Removed already-deprecated APIs such as dgl.khop_adj().
- Removed third-party dependencies: xbyak, tvm.

SparseMatrix class
- g.adj(self, etype=None, eweight_name=None) returns the sparse matrix representation of the DGL graph g on the edge type etype and edge weight eweight_name. (#5372)
- dgl.sparse.to_torch_sparse_coo/csr/csc and dgl.sparse.from_torch_sparse. (#5373)

SparseMatrix operators
- A * B. (#5368)
- A / B. (#5369)
- dgl.sparse.broadcast_add/sub/mul/div. (#5370)

SparseMatrix examples
Speed up the CPU to_block function in graph sampling. (#5305, @peizhou001)
The .adj() function of DGLGraph now produces a SparseMatrix; the original .adj(self, transpose=False, ctx=F.cpu(), scipy_fmt=None, etype=None) is renamed to .adj_external, returning the sparse format from external packages such as SciPy and PyTorch. (#5372)

The v1.0.0 release is a new milestone for DGL. 🎉🎉🎉
In this release, we introduced a brand new package, dgl.sparse, which allows DGL users to build GNNs in the sparse matrix paradigm. We provide Google Colab tutorials on the dgl.sparse package, from getting started with the sparse APIs to building different types of GNN models including Graph Diffusion, Hypergraph and Graph Transformer, plus 10+ examples of commonly used models in the GitHub code base.
NOTE: this feature is currently only available in Linux.
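To illustrate the sparse matrix paradigm, here is a minimal pure-Python sketch of the core computation behind a GCN-style layer: a sparse-dense matrix product (roughly what A @ X does on a dgl.sparse SparseMatrix). The spmm helper and the COO layout below are toy stand-ins for illustration, not the library implementation:

```python
def spmm(rows, cols, vals, X, n):
    """Dense result of (n-row sparse COO matrix) @ (dense matrix X)."""
    dim = len(X[0])
    out = [[0.0] * dim for _ in range(n)]
    for r, c, v in zip(rows, cols, vals):
        # Entry (r, c) scales row c of X and accumulates into row r.
        for k in range(dim):
            out[r][k] += v * X[c][k]
    return out

# 3-node cycle 0->1->2->0 with A[dst][src] = 1, so each node
# aggregates the features of its in-neighbor.
rows, cols, vals = [1, 2, 0], [0, 1, 2], [1.0, 1.0, 1.0]
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
H = spmm(rows, cols, vals, X, n=3)
print(H)  # [[5.0, 6.0], [1.0, 2.0], [3.0, 4.0]]
```

A full layer would follow this aggregation with a learned linear projection and nonlinearity; the sparse product is the part the dgl.sparse package accelerates.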
- is_unibipartite (#4556)

Starting from this release, we will drop support for CUDA 10.1 and 11.0. On Windows, we will further drop support for CUDA 10.2.
Linux: CentOS 7+ / Ubuntu 18.04+
PyTorch ver. \ CUDA ver. | 10.2 | 11.3 | 11.6 | 11.7 |
---|---|---|---|---|
1.12 | ✅ | ✅ | ✅ | |
1.13 | | | ✅ | ✅ |
Windows: Windows 10+/Windows server 2016+
PyTorch ver. \ CUDA ver. | 11.3 | 11.6 | 11.7 |
---|---|---|---|
1.12 | ✅ | ✅ | |
1.13 | | ✅ | ✅ |
The installation URL and conda repository have changed for CUDA packages. Please use the following:
# If you installed dgl-cuXX pip wheel or dgl-cudaXX.X conda package, please uninstall them first.
pip install dgl -f https://data.dgl.ai/wheels/repo.html # for CPU
pip install dgl -f https://data.dgl.ai/wheels/cuXX/repo.html # for CUDA, XX = 102, 113, 116 or 117
conda install dgl -c dglteam # for CPU
conda install dgl -c dglteam/label/cuXX # for CUDA, XX = 102, 113, 116 or 117
v0.9.1 is a minor release with the following update:
DGL now supports partitioning and preprocessing graph data using multiple machines. At its core is a new data format called Chunked Graph Data Format (CGDF), which stores graph data in chunks. The new pipeline processes data chunks in parallel, which not only reduces the memory requirement of each machine but also significantly accelerates the entire procedure. For a random graph with 1B nodes/5B edges, using a cluster of 8 AWS EC2 x1e.4xlarge instances (16 vCPU, 488GB RAM each), the new pipeline can reduce the running time to 2.7 hours and cut the monetary cost by 3.7x. Read the feature highlight blog for more details.
To get started with this new feature, check out the new user guide chapter.
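Why chunking reduces memory can be sketched with a toy pipeline: each worker holds only one edge chunk at a time, so peak memory scales with the chunk size rather than the full edge list. The helper names and the thread-pool choice below are illustrative assumptions, not DGL's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_in_degrees(chunk):
    """Count in-degrees contributed by a single edge chunk."""
    counts = {}
    for _, dst in chunk:
        counts[dst] = counts.get(dst, 0) + 1
    return counts

def parallel_in_degrees(edges, chunk_size, workers=2):
    # Split the edge list into fixed-size chunks; workers process
    # chunks in parallel and partial results are merged at the end.
    chunks = [edges[i:i + chunk_size] for i in range(0, len(edges), chunk_size)]
    total = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(chunk_in_degrees, chunks):
            for dst, c in partial.items():
                total[dst] = total.get(dst, 0) + c
    return total

edges = [(0, 1), (2, 1), (3, 2), (1, 2), (0, 2)]
print(parallel_in_degrees(edges, chunk_size=2))  # {1: 2, 2: 3}
```

The real pipeline applies the same split-process-merge pattern to heavier preprocessing steps (partitioning, feature conversion) across machines rather than threads.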
- dgl/examples/pytorch/multigpu/: a new example of multi-GPU graph property prediction that can achieve 9.5x speedup on 8 GPUs. (#4385)
- Add dgl.use_libxsmm and dgl.is_libxsmm_enabled to enable/disable Intel LibXSMM. (#4455)
- Add exclude_self to exclude self-loop edges for dgl.knn_graph. The API now supports creating a batch of KNN graphs. (#4389)
- AsyncTransferer class: the functionality has been incorporated into the DGL DataLoader. (#4505)
- num_servers and num_workers arguments of dgl.distributed.initialize. (#4284)

Starting from this release, we will drop support for CUDA 10.1 and 11.0. On Windows, we will further drop support for CUDA 10.2.
Linux: CentOS 7+ / Ubuntu 18.04+
PyTorch ver. \ CUDA ver. | 10.2 | 11.1 | 11.3 | 11.5 | 11.6 |
---|---|---|---|---|---|
1.9 | ✅ | ✅ | | | |
1.10 | ✅ | ✅ | ✅ | | |
1.11 | ✅ | ✅ | ✅ | ✅ | |
1.12 | ✅ | | ✅ | | ✅ |
Windows: Windows 10+/Windows server 2016+
PyTorch ver. \ CUDA ver. | 11.1 | 11.3 | 11.5 | 11.6 |
---|---|---|---|---|
1.9 | ✅ | | | |
1.10 | ✅ | ✅ | | |
1.11 | ✅ | ✅ | ✅ | |
1.12 | | ✅ | | ✅ |
- num_bases in RelGraphConv module (#4321)