Search to Fine-tune Pre-trained Graph Neural Networks for Graph-level
Tasks
- URL: http://arxiv.org/abs/2308.06960v2
- Date: Fri, 1 Mar 2024 11:52:33 GMT
- Title: Search to Fine-tune Pre-trained Graph Neural Networks for Graph-level
Tasks
- Authors: Zhili Wang, Shimin Di, Lei Chen, Xiaofang Zhou
- Abstract summary: Graph neural networks (GNNs) have achieved unprecedented success in many graph-related tasks.
Recent efforts pre-train GNNs on a large-scale unlabeled graph and adapt the knowledge from the unlabeled graph to the target downstream task.
Despite the importance of fine-tuning, current GNN pre-training works often neglect the design of a good fine-tuning strategy.
- Score: 22.446655655309854
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, graph neural networks (GNNs) have achieved unprecedented success
in many graph-related tasks. However, GNNs face the label scarcity issue just as
other neural networks do. Thus, recent efforts pre-train GNNs on a large-scale
unlabeled graph and adapt the knowledge from the unlabeled graph to the target
downstream task. The adaptation is generally achieved by fine-tuning the
pre-trained GNNs with a limited amount of labeled data. Despite the importance
of fine-tuning, current GNN pre-training works often neglect the design of a
good fine-tuning strategy that would better leverage the transferred knowledge
and improve performance on downstream tasks. Only a few works have started to
investigate better fine-tuning strategies for pre-trained GNNs, and their
designs either rely on strong assumptions or overlook the data-aware issue,
i.e., that different downstream datasets may call for different fine-tuning
strategies. Therefore, in this paper we aim to design a better fine-tuning
strategy for pre-trained GNNs to improve model performance.
Given a pre-trained GNN, we propose to search to fine-tune pre-trained graph
neural networks for graph-level tasks (S2PGNN), which adaptively designs a
suitable fine-tuning framework for the labeled data of the downstream task. To
ensure that searching the fine-tuning strategy actually brings an improvement,
we carefully summarize a search space of fine-tuning frameworks that is
suitable for GNNs. Empirical studies show that S2PGNN can be implemented on top
of 10 well-known pre-trained GNNs and consistently improves their performance.
Moreover, S2PGNN achieves better performance than existing fine-tuning
strategies from both within and outside the GNN area. Our code is publicly
available at \url{https://anonymous.4open.science/r/code_icde2024-A9CB/}.
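As a rough illustration of the search-to-fine-tune idea described in the abstract, the sketch below runs a naive random search over a small, hypothetical space of fine-tuning configurations and keeps the configuration that scores best on validation data. The search space, parameter names, and callback signatures are placeholders chosen for illustration and do not reproduce the actual S2PGNN search space or search algorithm.

```python
# Minimal, framework-agnostic sketch of "search to fine-tune" (illustrative only;
# the search space and callbacks are hypothetical, not the S2PGNN design).
import copy
import random

SEARCH_SPACE = {
    "num_frozen_layers": [0, 1, 2],           # how many pre-trained GNN layers to freeze
    "learning_rate": [1e-2, 1e-3, 1e-4],      # fine-tuning learning rate
    "graph_pooling": ["mean", "sum", "max"],  # readout for graph-level prediction
}

def sample_config(space, rng=random):
    """Draw one candidate fine-tuning configuration from the search space."""
    return {name: rng.choice(choices) for name, choices in space.items()}

def search_to_finetune(pretrained_gnn, finetune_fn, evaluate_fn, n_trials=10):
    """Try several fine-tuning configurations and keep the best one.

    finetune_fn(model, config) fine-tunes a copy of the pre-trained model in place;
    evaluate_fn(model) returns a validation score (higher is better). Both are
    supplied by the caller, so this driver stays independent of any GNN library.
    """
    best = {"score": float("-inf"), "config": None, "model": None}
    for _ in range(n_trials):
        config = sample_config(SEARCH_SPACE)
        model = copy.deepcopy(pretrained_gnn)  # always restart from pre-trained weights
        finetune_fn(model, config)
        score = evaluate_fn(model)
        if score > best["score"]:
            best = {"score": score, "config": config, "model": model}
    return best
```

A random-search driver like this is only the simplest stand-in; the paper's contribution lies in the GNN-specific search space it summarizes and in how the search is carried out, which the abstract does not detail.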
Related papers
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess how a specific GNN model trained on labeled and observed graphs will perform on unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push one step forward on the ensemble learning of GNNs, with improved accuracy, robustness, and resilience to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing becomes a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Optimization of Graph Neural Networks: Implicit Acceleration by Skip
Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the optimization dynamics of GNNs, in particular how skip connections and greater depth affect training.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network; a generic sketch of such joint graph-and-weight masking appears after this list.
arXiv Detail & Related papers (2021-02-12T21:52:43Z) - GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z) - Graph Random Neural Network for Semi-Supervised Learning on Graphs [36.218650686748546]
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
Most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce.
In this paper, we propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS (GRAND) -- to address these issues.
arXiv Detail & Related papers (2020-05-22T09:40:13Z) - Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
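The graph lottery ticket entry above mentions pruning the graph adjacency matrix and the model weights simultaneously; the toy NumPy sketch below illustrates one generic way to do such joint magnitude-based masking. It is not the UGS authors' method, and the function names, keep ratios, and thresholding rule are arbitrary choices made only for illustration.

```python
# Toy illustration of jointly sparsifying a graph adjacency matrix and a weight
# matrix by magnitude-based masking (generic sketch, not the UGS algorithm).
import numpy as np

def magnitude_mask(values, keep_ratio):
    """Return a 0/1 mask that keeps the largest-magnitude entries of `values`."""
    flat = np.abs(values).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return (np.abs(values) >= threshold).astype(values.dtype)

def joint_sparsify(adjacency, weights, adj_keep=0.9, weight_keep=0.5):
    """Prune edges and weights together, in the spirit of graph lottery tickets."""
    adj_mask = magnitude_mask(adjacency, adj_keep)
    weight_mask = magnitude_mask(weights, weight_keep)
    return adjacency * adj_mask, weights * weight_mask

# Example usage on a tiny random graph and weight matrix.
adj = np.random.rand(5, 5)
w = np.random.randn(16, 8)
sparse_adj, sparse_w = joint_sparsify(adj, w)
```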