ProG Versions

All in One: Multi-task Prompting for Graph Neural Networks, KDD 2023.

v0.2

2 months ago

Big News! We are happy to announce that we have finished most of the update work from ProG to ProG++!

From v0.2 onward, ProG refers to ProG++.

🌟ProG++🌟: A Unified Python Library for Graph Prompting

ProG++ is an extended version of ProG that supports more graph prompt models. The implemented models include:

  • [All in One] X. Sun, H. Cheng, J. Li, B. Liu, and J. Guan, “All in One: Multi-Task Prompting for Graph Neural Networks,” KDD, 2023.
  • [GPF Plus] T. Fang, Y. Zhang, Y. Yang, C. Wang, and L. Chen, “Universal Prompt Tuning for Graph Neural Networks,” NeurIPS, 2023.
  • [GraphPrompt] Z. Liu, X. Yu, Y. Fang, et al., “GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks,” The Web Conference, 2023.
  • [GPPT] M. Sun, K. Zhou, X. He, Y. Wang, and X. Wang, “GPPT: Graph Pre-Training and Prompt Tuning to Generalize Graph Neural Networks,” KDD, 2022.
  • [GPF] T. Fang, Y. Zhang, Y. Yang, and C. Wang, “Prompt Tuning for Graph Neural Networks,” arXiv preprint, 2022.

prog_v0.1

2 months ago

ProG (Prompt Graph) is a library built upon PyTorch for easily conducting single- or multi-task prompting with pre-trained Graph Neural Networks (GNNs). The idea derives from the paper: Xiangguo Sun, Hong Cheng, Jia Li, et al. All in One: Multi-task Prompting for Graph Neural Networks. KDD 2023 (🔥 Best Research Paper Award, the first such award for Hong Kong and mainland China), in which the authors released their raw code. This repository is a redesigned and substantially enhanced version of that code.

v0.1.5

8 months ago

Support GPU devices.

v0.1.4

9 months ago
  1. Support batch training and testing in meta_demo.py.
  2. Replace the class Pipeline in ProG.prompt with a new class FrontAndHead so that maml.clone is more memory-efficient.
  3. Partially support GPU devices (still untested, but this version is very close to the target).

v0.1.3

9 months ago
  1. Fully replace sklearn.metrics with the more advanced torchmetrics in ProG.eva (with corresponding updates in no_meta_demo.py), which better supports batch computation of the F1 score and other metrics.

  2. Implement batch training and testing for prompt_w_o_h in no_meta_demo.py.

  3. Begin implementing GPU support (not yet tested or fully finished).
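The rationale for accumulating metrics over batches, rather than averaging per-batch scores, can be sketched in plain Python. The class below is a hypothetical stand-in for the update/compute pattern that torchmetrics provides, not the actual ProG.eva code: per-class counts are accumulated across batches, and macro-F1 is derived once at the end, which avoids the bias of averaging per-batch F1 values.

```python
from collections import defaultdict

class MacroF1Accumulator:
    """Accumulates per-class TP/FP/FN across batches, then computes macro-F1.

    Illustrates the update()/compute() pattern that torchmetrics automates;
    this is a sketch, not the ProG.eva implementation.
    """

    def __init__(self):
        self.tp = defaultdict(int)
        self.fp = defaultdict(int)
        self.fn = defaultdict(int)

    def update(self, preds, targets):
        # preds / targets: integer class labels for one batch
        for p, t in zip(preds, targets):
            if p == t:
                self.tp[t] += 1
            else:
                self.fp[p] += 1
                self.fn[t] += 1

    def compute(self):
        classes = set(self.tp) | set(self.fp) | set(self.fn)
        f1s = []
        for c in classes:
            tp, fp, fn = self.tp[c], self.fp[c], self.fn[c]
            denom = 2 * tp + fp + fn
            f1s.append(2 * tp / denom if denom else 0.0)
        return sum(f1s) / len(f1s) if f1s else 0.0

metric = MacroF1Accumulator()
metric.update([0, 1, 1], [0, 1, 0])   # batch 1
metric.update([2, 2, 0], [2, 1, 0])   # batch 2
print(round(metric.compute(), 4))     # prints 0.6556
```

With torchmetrics, the same loop simply calls `metric.update(preds, targets)` per batch and `metric.compute()` once after the last batch.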

v0.1.2

9 months ago

Implement acc_f1_over_batches in ProG.eva for the function prompt_w_h() in no_meta_demo.py.

latest

10 months ago

Big Update!

Compared with the raw project released with the paper (code), v0.1.1 contains extensive changes, including but not limited to:

  • Totally rewrote the whole project; the changed code accounts for more than 80% of the original version.
  • Greatly simplified the code.
  • Completely restructured the project, with new class names and new function designs.
  • Adopted torchmetrics for automatic accumulation over batches in the evaluation stage (e.g., accuracy, F1).
  • (In progress) Gradually removing sklearn.metrics from the original version.
  • A clearer prompt module: the raw project contained more than three different prompt implementations, which were very messy; they have all been removed and unified into a LightPrompt and a HeavyPrompt.
  • Support for batch training and testing in the function meta_test_adam.
  • And more.
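The benefit of unifying the prompt implementations can be illustrated with a small sketch. The classes below are hypothetical stand-ins (not ProG's actual API): the point is that once every prompt variant exposes the same transform interface, a feature-level prompt (adding a learnable token to every node feature) and a graph-level prompt (inserting extra prompt nodes) become interchangeable at the call site.

```python
# Hypothetical sketch of a unified prompt interface; names and internals
# are illustrative, not ProG's actual LightPrompt/HeavyPrompt classes.

class Graph:
    def __init__(self, node_feats, edges):
        self.node_feats = node_feats  # list of feature vectors
        self.edges = edges            # list of (src, dst) index pairs

class LightPromptSketch:
    """Feature-level prompt: add one shared token to every node feature."""
    def __init__(self, dim):
        self.token = [0.1] * dim  # stand-in for a learnable parameter

    def transform(self, g):
        feats = [[x + t for x, t in zip(f, self.token)] for f in g.node_feats]
        return Graph(feats, list(g.edges))

class HeavyPromptSketch:
    """Graph-level prompt: insert prompt nodes wired to every original node."""
    def __init__(self, dim, num_tokens=2):
        self.tokens = [[0.1] * dim for _ in range(num_tokens)]

    def transform(self, g):
        n = len(g.node_feats)
        feats = g.node_feats + self.tokens
        extra = [(n + i, v) for i in range(len(self.tokens)) for v in range(n)]
        return Graph(feats, list(g.edges) + extra)

g = Graph([[1.0, 2.0], [3.0, 4.0]], [(0, 1)])
for prompt in (LightPromptSketch(2), HeavyPromptSketch(2)):
    out = prompt.transform(g)          # same call site for either variant
    print(len(out.node_feats), len(out.edges))  # "2 1" then "4 5"
```

Because the training loop never branches on the prompt type, swapping prompt designs requires no changes outside the prompt module itself, which is the main cleanup this unification buys.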

Explore this version to find more surprising things!

Evaluation results from this version:

Multi-class node classification (100-shot) on CiteSeer:

| Method                                        |  ACC  | Macro-F1 |
|-----------------------------------------------|-------|----------|
| Reported in the paper (Prompt)                | 80.50 |  80.05   |
| This version's code (Prompt, one run)         | 81.00 |   --     |
| Reported in the paper (Prompt w/o h)          | 80.00 |  80.05   |
| This version's code (Prompt w/o h, one run)   | 79.78 |  80.01   |

--: batch F1 had not yet been implemented in this version.

Future TODO List

  • Remove the self-implemented MAML module and replace it with a third-party meta-learning library such as learn2learn or Torchmeta.
  • Support sparse training.
  • Support GPU.
  • Support true batch computing.
  • Support GIN and more GNNs.
  • Support more pre-training methods, such as GraphGPT.
  • Test on large-scale datasets.
  • Support distributed computing.
  • Support more tasks and datasets.

Full Changelog: https://github.com/sheldonresearch/ProG/commits/latest

stable

10 months ago

This is the polished version of the raw code released with the paper. (raw code)