ROD: Reception-aware Online Distillation for Sparse Graphs
- URL: http://arxiv.org/abs/2107.11789v1
- Date: Sun, 25 Jul 2021 11:55:47 GMT
- Title: ROD: Reception-aware Online Distillation for Sparse Graphs
- Authors: Wentao Zhang, Yuezihan Jiang, Yang Li, Zeang Sheng, Yu Shen, Xupeng
Miao, Liang Wang, Zhi Yang, Bin Cui
- Abstract summary: We propose ROD, a novel reception-aware online knowledge distillation approach for sparse graph learning.
We design three supervision signals for ROD: multi-scale reception-aware graph knowledge, task-based supervision, and rich distilled knowledge.
Our approach has been extensively evaluated on 9 datasets and a variety of graph-based tasks.
- Score: 23.55530524584572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been widely used in many graph-based tasks
such as node classification, link prediction, and node clustering. However,
GNNs gain their performance benefits mainly from performing the feature
propagation and smoothing across the edges of the graph, thus requiring
sufficient connectivity and label information for effective propagation.
Unfortunately, many real-world networks are sparse in terms of both edges and
labels, leading to sub-optimal performance of GNNs. Recent work on this
sparsity problem has focused on self-training, which expands the supervised
signal with pseudo labels. Nevertheless, self-training cannot fully realize
its potential on sparse graphs because the pseudo labels it produces are
limited in both quality and quantity.
In this paper, we propose ROD, a novel reception-aware online knowledge
distillation approach for sparse graph learning. We design three supervision
signals for ROD: multi-scale reception-aware graph knowledge, task-based
supervision, and rich distilled knowledge, allowing online knowledge transfer
in a peer-teaching manner. To extract knowledge concealed in the multi-scale
reception fields, ROD explicitly requires individual student models to preserve
different levels of locality information. For a given task, each student would
predict based on its reception-scale knowledge, while simultaneously a strong
teacher is established on-the-fly by combining multi-scale knowledge. Our
approach has been extensively evaluated on 9 datasets and a variety of
graph-based tasks, including node classification, link prediction, and node
clustering. The results demonstrate that ROD achieves state-of-the-art
performance and is more robust to graph sparsity.
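To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of reception-aware online distillation as the abstract describes it: node features are pre-propagated to several reception scales, one student predicts per scale, and a teacher is assembled on-the-fly as a learnable combination of the students. All names (`propagate`, `RODSketch`, `rod_loss`) and the SGC-style linear students are illustrative assumptions.

```python
# Minimal sketch of reception-aware online distillation (illustrative, not the
# authors' implementation). adj_norm is a normalized adjacency matrix.
import torch
import torch.nn.functional as F

def propagate(adj_norm, x, num_scales):
    """Precompute multi-scale features X, AX, A^2 X, ... (one per reception scale)."""
    feats = [x]
    for _ in range(num_scales - 1):
        feats.append(adj_norm @ feats[-1])
    return feats

class RODSketch(torch.nn.Module):
    def __init__(self, in_dim, num_classes, num_scales=3):
        super().__init__()
        # One linear student per reception scale (an SGC-style assumption).
        self.students = torch.nn.ModuleList(
            torch.nn.Linear(in_dim, num_classes) for _ in range(num_scales))
        # Learnable gate that combines the students into the on-the-fly teacher.
        self.gate = torch.nn.Parameter(torch.zeros(num_scales))

    def forward(self, multi_scale_feats):
        logits = [s(f) for s, f in zip(self.students, multi_scale_feats)]
        w = torch.softmax(self.gate, dim=0)
        teacher = sum(wi * li for wi, li in zip(w, logits))
        return logits, teacher

def rod_loss(logits, teacher, labels, train_mask, tau=1.0):
    # Task-based supervision: every student and the teacher fit the labels.
    task = sum(F.cross_entropy(l[train_mask], labels[train_mask]) for l in logits)
    task = task + F.cross_entropy(teacher[train_mask], labels[train_mask])
    # Distilled knowledge: students mimic the ensemble teacher on all nodes.
    soft_teacher = F.softmax(teacher.detach() / tau, dim=1)
    kd = sum(F.kl_div(F.log_softmax(l / tau, dim=1), soft_teacher,
                      reduction="batchmean") for l in logits)
    return task + kd
```

Because the teacher is just a gated combination of the students' logits, the distillation is online: teacher and students improve together in the same training loop, with no separately pre-trained teacher required.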
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority on the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- SMARTQUERY: An Active Learning Framework for Graph Neural Networks through Hybrid Uncertainty Reduction [25.77052028238513]
We propose a framework to learn a graph neural network with very few labeled nodes using a hybrid uncertainty reduction function.
We demonstrate the competitive performance of our method against state-of-the-art approaches using very few labeled data.
arXiv Detail & Related papers (2022-12-02T20:49:38Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Meta Propagation Networks for Graph Few-shot Semi-supervised Learning [39.96930762034581]
We propose a novel network architecture equipped with a novel meta-learning algorithm to solve the graph few-shot semi-supervised learning problem.
In essence, our framework Meta-PN infers high-quality pseudo labels on unlabeled nodes via a meta-learned label propagation strategy (a sketch of the plain label propagation primitive appears after this list).
Our approach offers easy and substantial performance gains compared to existing techniques on various benchmark datasets.
arXiv Detail & Related papers (2021-12-18T00:11:56Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding transforms and encodes graph-structured data, which lives in a high-dimensional and non-Euclidean feature space.
CensNet is a general graph embedding framework that embeds both nodes and edges into a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
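As referenced in the Meta-PN entry above, here is a minimal sketch of plain label propagation, the classic primitive that pseudo-labeling strategies such as Meta-PN's meta-learned variant build on (the meta-learning itself is omitted). The function name and default parameters are illustrative assumptions.

```python
# Minimal sketch of classic label propagation (illustrative; not Meta-PN's
# meta-learned variant). adj_norm is a normalized adjacency matrix.
import torch
import torch.nn.functional as F

def label_propagation(adj_norm, labels, train_mask, num_classes,
                      steps=10, alpha=0.9):
    """Diffuse one-hot training labels over the graph to get soft pseudo labels."""
    y = torch.zeros(labels.shape[0], num_classes)
    y[train_mask] = F.one_hot(labels[train_mask], num_classes).float()
    out = y.clone()
    for _ in range(steps):
        # Propagate along edges, then re-inject the known labels (seeds).
        out = alpha * (adj_norm @ out) + (1 - alpha) * y
    return out
```

Taking the argmax of the returned soft labels on unlabeled nodes yields the kind of pseudo labels that self-training methods use to expand the supervised signal on sparse graphs, which is exactly the signal the ROD paper argues is limited in quality and quantity.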