Curriculum Learning for Graph Neural Networks: Which Edges Should We
Learn First
- URL: http://arxiv.org/abs/2310.18735v1
- Date: Sat, 28 Oct 2023 15:35:34 GMT
- Title: Curriculum Learning for Graph Neural Networks: Which Edges Should We
Learn First
- Authors: Zheng Zhang, Junxiang Wang, and Liang Zhao
- Abstract summary: We propose a novel strategy to incorporate more edges into training according to their difficulty from easy to hard.
We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations.
- Score: 13.37867275976255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved great success in representing data
with dependencies by recursively propagating and aggregating messages along the
edges. However, edges in real-world graphs often have varying degrees of
difficulty, and some edges may even be noisy to the downstream tasks.
Therefore, existing GNNs may lead to suboptimal learned representations because
they usually treat every edge in the graph equally. On the other hand,
Curriculum Learning (CL), which mimics the human learning principle of learning
data samples in a meaningful order, has been shown to be effective in improving
the generalization ability and robustness of representation learners by
gradually proceeding from easy to more difficult samples during training.
Unfortunately, existing CL strategies are designed for independent data samples
and cannot trivially generalize to handle data dependencies. To address these
issues, we propose a novel CL strategy to gradually incorporate more edges into
training according to their difficulty from easy to hard, where the degree of
difficulty is measured by how well the edges are expected given the model
training status. We demonstrate the strength of our proposed method in
improving the generalization ability and robustness of learned representations
through extensive experiments on nine synthetic datasets and nine real-world
datasets. The code for our proposed method is available at
https://github.com/rollingstonezz/Curriculum_learning_for_GNNs.
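The core recipe is easy to prototype. Below is a minimal sketch, assuming the difficulty of an edge is approximated by how poorly the current node embeddings "expect" it (here, one minus the cosine similarity of its endpoints) and a simple linear pacing function; the paper's exact difficulty measure and scheduler may differ.

```python
import torch

def pacing_fraction(epoch, num_epochs, lam0=0.25):
    """Linear pacing: the fraction of edges kept grows from lam0 to 1.0."""
    return min(1.0, lam0 + (1.0 - lam0) * epoch / num_epochs)

def select_easy_edges(edge_index, node_emb, epoch, num_epochs):
    """Return the subset of easiest edges to train on at this epoch.

    Difficulty proxy (assumption): 1 - cosine similarity of the two endpoint
    embeddings, i.e. an edge counts as "easy" if the current model already
    expects it.
    """
    src, dst = edge_index                      # edge_index: [2, E]
    difficulty = 1.0 - torch.cosine_similarity(node_emb[src], node_emb[dst], dim=-1)
    k = max(1, int(pacing_fraction(epoch, num_epochs) * edge_index.size(1)))
    easiest = torch.argsort(difficulty)[:k]    # ascending: easiest first
    return edge_index[:, easiest]
```

At each epoch the GNN would then message-pass only over the selected subgraph, with the kept fraction growing until every edge participates.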
Related papers
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess the quality of the data, we design a loss-aware training schedule, named LTS, that measures the quality of every node of the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing HGNNs' capabilities for analyzing complex graph-structured data.
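For intuition, a loss-aware schedule of this kind can be sketched in a few lines; using the per-node loss as the quality signal and linearly expanding the training set are illustrative assumptions, not the exact LTS formulation.

```python
import torch
import torch.nn.functional as F

def lts_style_node_mask(logits, labels, train_mask, epoch, num_epochs, start=0.5):
    """Loss-aware schedule (sketch): train only on the lowest-loss nodes,
    expanding the kept fraction from `start` to 1.0 over training."""
    per_node_loss = F.cross_entropy(logits, labels, reduction="none")
    keep_frac = min(1.0, start + (1.0 - start) * epoch / num_epochs)
    train_idx = train_mask.nonzero(as_tuple=True)[0]
    k = max(1, int(keep_frac * train_idx.numel()))
    order = torch.argsort(per_node_loss[train_idx])   # lowest loss first
    kept = train_idx[order[:k]]
    mask = torch.zeros_like(train_mask)
    mask[kept] = True
    return mask
```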
arXiv Detail & Related papers (2024-02-29T05:44:41Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
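A rough sketch of that two-stage recipe follows; the model name, LoRA settings, and mean pooling are illustrative choices, not prescribed by the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "sentence-transformers/all-MiniLM-L6-v2"   # illustrative encoder choice
tokenizer = AutoTokenizer.from_pretrained(name)
lm = get_peft_model(AutoModel.from_pretrained(name),
                    LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"]))

# ... supervised PEFT of `lm` (plus a small classification head) on the
# downstream node-classification texts would go here ...

@torch.no_grad()
def node_embeddings(texts, batch_size=32):
    """Mean-pool the last hidden states of the fine-tuned LM for each node text."""
    embs = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
        out = lm(**enc).last_hidden_state           # [B, T, H]
        mask = enc["attention_mask"].unsqueeze(-1)  # [B, T, 1]
        embs.append((out * mask).sum(1) / mask.sum(1))
    return torch.cat(embs)   # used as node features for any downstream GNN
```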
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
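A minimal sketch of the dual-channel idea is shown below; the kNN construction of the global graph and the GCN layers are assumptions for illustration, and D$^2$PT's actual propagation scheme and training objective are more involved.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

def knn_edges(x, k=10):
    """Global graph from feature similarity: connect each node to its k nearest neighbors."""
    sim = F.normalize(x, dim=-1) @ F.normalize(x, dim=-1).t()
    sim.fill_diagonal_(float("-inf"))
    dst = sim.topk(k, dim=-1).indices
    src = torch.arange(x.size(0), device=x.device).unsqueeze(-1).expand_as(dst)
    return torch.stack([src.reshape(-1), dst.reshape(-1)])   # [2, N*k]

class DualChannelGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.local = GCNConv(in_dim, hid_dim)   # channel on the (incomplete) input graph
        self.glob = GCNConv(in_dim, hid_dim)    # channel on the global similarity graph
        self.head = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, edge_index, knn_edge_index):
        h = F.relu(self.local(x, edge_index)) + F.relu(self.glob(x, knn_edge_index))
        return self.head(h)
```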
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - A Survey of Learning on Small Data: Generalization, Optimization, and
Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z) - Certified Graph Unlearning [39.29148804411811]
Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs).
We introduce the first known framework for certified graph unlearning of GNNs.
Three different types of unlearning requests need to be considered, including node feature, edge and node unlearning.
arXiv Detail & Related papers (2022-06-18T07:41:10Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive
Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z) - Jointly Learnable Data Augmentations for Self-Supervised GNNs [0.311537581064266]
We propose GraphSurgeon, a novel self-supervised learning method for graph representation learning.
We take advantage of the flexibility of the learnable data augmentation and introduce a new strategy that augments in the embedding space.
Our findings show that GraphSurgeon is comparable to six SOTA semi-supervised baselines and on par with five SOTA self-supervised baselines in node classification tasks.
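A toy example of what "augmenting in the embedding space" can look like is given below, with a learnable perturbation module producing two views for a contrastive objective; the module and loss are illustrative stand-ins, not GraphSurgeon's exact design.

```python
import torch
import torch.nn.functional as F

class LearnableAugment(torch.nn.Module):
    """Learnable embedding-space augmentation: a small MLP predicts a
    perturbation that is added to the node embeddings (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.perturb = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim))

    def forward(self, h):
        return h + 0.1 * self.perturb(h)

def contrastive_loss(h1, h2, temperature=0.5):
    """InfoNCE-style loss between two augmented views of the same nodes."""
    z1, z2 = F.normalize(h1, dim=-1), F.normalize(h2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```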
arXiv Detail & Related papers (2021-08-23T21:33:12Z) - Training Robust Graph Neural Networks with Topology Adaptive Edge
Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
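For a concrete picture, here is a small sketch of topology-aware edge dropping; the specific rule (dropping edges between high-degree endpoints more aggressively) is an illustrative assumption rather than the paper's exact criterion.

```python
import torch

def adaptive_edge_drop(edge_index, num_nodes, p_max=0.5):
    """Drop edges with a probability adapted to local topology.
    Illustrative rule (assumption): edges between high-degree nodes are
    dropped more often, since they are more redundant for message passing."""
    deg = torch.bincount(edge_index.reshape(-1), minlength=num_nodes).float()
    src, dst = edge_index
    score = (deg[src] * deg[dst]).sqrt()
    p_drop = p_max * score / score.max()            # normalize to [0, p_max]
    keep = torch.rand(edge_index.size(1)) >= p_drop
    return edge_index[:, keep]
```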
arXiv Detail & Related papers (2021-06-05T13:20:36Z) - Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
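Sample-based training is the approach most practitioners reach for at scale; a minimal example using PyTorch Geometric's neighbor sampling is shown below (the loader settings are illustrative).

```python
import torch.nn.functional as F
from torch_geometric.loader import NeighborLoader

def train_epoch_sampled(model, data, optimizer):
    """One epoch of sample-based training: each mini-batch is a sampled
    subgraph rooted at a set of seed (training) nodes."""
    loader = NeighborLoader(
        data,
        num_neighbors=[10, 10],       # neighbors kept per GNN layer
        batch_size=1024,
        input_nodes=data.train_mask,  # seed nodes for mini-batches
    )
    model.train()
    for batch in loader:
        optimizer.zero_grad()
        out = model(batch.x, batch.edge_index)
        # the first `batch_size` rows of the batch correspond to the seed nodes
        loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
        loss.backward()
        optimizer.step()
```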
arXiv Detail & Related papers (2021-05-05T20:44:10Z) - Sub-graph Contrast for Scalable Self-Supervised Graph Representation
Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable because of limited computation and memory resources.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
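The central-node-versus-subgraph idea can be illustrated with a small contrastive objective; the similarity measure and margin loss below are illustrative stand-ins for Subg-Con's actual components.

```python
import torch
import torch.nn.functional as F

def subgraph_contrast_loss(central_emb, subgraph_summary, margin=0.5):
    """Encourage each central node to be closer to the summary of its own
    sampled subgraph than to other nodes' subgraph summaries (a shuffled
    copy within the batch serves as negatives)."""
    pos = F.cosine_similarity(central_emb, subgraph_summary, dim=-1)
    perm = torch.randperm(subgraph_summary.size(0))
    neg = F.cosine_similarity(central_emb, subgraph_summary[perm], dim=-1)
    return F.relu(neg - pos + margin).mean()
```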
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.