Self-Supervised Graph Structure Refinement for Graph Neural Networks
- URL: http://arxiv.org/abs/2211.06545v1
- Date: Sat, 12 Nov 2022 02:01:46 GMT
- Title: Self-Supervised Graph Structure Refinement for Graph Neural Networks
- Authors: Jianan Zhao, Qianlong Wen, Mingxuan Ju, Chuxu Zhang, Yanfang Ye
- Abstract summary: Graph structure learning (GSL) aims to learn the adjacency matrix for graph neural networks (GNNs).
Most existing GSL works apply a joint learning framework where the estimated adjacency matrix and GNN parameters are optimized for downstream tasks.
We propose a graph structure refinement (GSR) framework with a pretrain-finetune pipeline.
- Score: 31.924317784535155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph structure learning (GSL), which aims to learn the adjacency matrix for
graph neural networks (GNNs), has shown great potential in boosting the
performance of GNNs. Most existing GSL works apply a joint learning framework
where the estimated adjacency matrix and GNN parameters are optimized for
downstream tasks. However, GSL is essentially a link prediction task, whose
goal may differ considerably from that of the downstream task. The
inconsistency between these two goals prevents GSL methods from learning the
potentially optimal graph structure. Moreover, the joint learning framework
suffers from scalability issues in time and space when estimating and
optimizing the adjacency matrix. To mitigate these issues, we propose a
graph structure refinement (GSR) framework with a pretrain-finetune pipeline.
Specifically, the pre-training phase aims to comprehensively estimate the
underlying graph structure by a multi-view contrastive learning framework with
both intra- and inter-view link prediction tasks. Then, the graph structure is
refined by adding and removing edges according to the edge probabilities
estimated by the pre-trained model. Finally, the fine-tuning GNN is initialized
by the pre-trained model and optimized toward downstream tasks. Because the
refined graph structure remains static during fine-tuning, GSR avoids
estimating and optimizing the graph structure in the fine-tuning phase, which
brings great scalability and efficiency. Moreover, the fine-tuning GNN is
boosted by both the migrated knowledge and the refined graph. Extensive experiments are
conducted to evaluate the effectiveness (best performance on six benchmark
datasets), efficiency, and scalability (13.8x faster using 32.8% GPU memory
compared to the best GSL baseline on Cora) of the proposed model.
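As a rough illustration of the refinement step described in the abstract, the sketch below scores every node pair using embeddings from a pre-trained encoder, then adds the highest-probability missing edges and drops the lowest-probability existing ones. The function name, the dot-product-plus-sigmoid edge scorer, and the add/remove ratios are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def refine_graph(z, adj, add_ratio=0.05, remove_ratio=0.05):
    # z: (n, d) node embeddings from the pre-trained encoder (assumed input).
    # adj: (n, n) dense, symmetric 0/1 float adjacency matrix.
    n = z.size(0)
    probs = torch.sigmoid(z @ z.t())                   # pairwise edge probabilities
    upper = torch.triu(torch.ones(n, n), diagonal=1).bool()
    existing = adj.bool() & upper                      # current edges (upper triangle)
    missing = (~adj.bool()) & upper                    # candidate edges to add
    num_edges = int(existing.sum())

    refined = adj.clone()

    # Add the most probable edges that are currently missing.
    k_add = int(add_ratio * num_edges)
    if k_add > 0:
        cand = probs.masked_fill(~missing, float("-inf"))
        idx = torch.topk(cand.flatten(), k_add).indices
        refined.view(-1)[idx] = 1.0

    # Remove the least probable edges that are currently present.
    k_rm = int(remove_ratio * num_edges)
    if k_rm > 0:
        kept = probs.masked_fill(~existing, float("inf"))
        idx = torch.topk(kept.flatten(), k_rm, largest=False).indices
        refined.view(-1)[idx] = 0.0

    # Symmetrize from the (modified) upper triangle and return the refined graph.
    refined = torch.triu(refined, diagonal=1)
    return refined + refined.t()
```

In the full pipeline described above, the pre-trained encoder that produces z would also initialize the GNN that is then fine-tuned on the refined graph.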
Related papers
- Improving the interpretability of GNN predictions through conformal-based graph sparsification [9.550589670316523]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.
We propose a GNN training approach that finds the most predictive subgraph by removing edges and/or nodes.
We rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions.
arXiv Detail & Related papers (2024-04-18T17:34:47Z) - Two Heads Are Better Than One: Boosting Graph Sparse Training via
Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z) - Semantic Graph Neural Network with Multi-measure Learning for
Semi-supervised Classification [5.000404730573809]
Graph Neural Networks (GNNs) have attracted increasing attention in recent years.
Recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph.
We propose a novel framework for semi-supervised classification.
arXiv Detail & Related papers (2022-12-04T06:17:11Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Iterative Deep Graph Learning for Graph Neural Networks: Better and
Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)