Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal
Transport
- URL: http://arxiv.org/abs/2203.10453v1
- Date: Sun, 20 Mar 2022 04:41:17 GMT
- Title: Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal
Transport
- Authors: Jiying Zhang, Xi Xiao, Long-Kai Huang, Yu Rong and Yatao Bian
- Abstract summary: GTOT-Tuning is designed to exploit the properties of graph data to better preserve the representations produced by fine-tuned networks.
By using the adjacency relationships among nodes, the GTOT regularizer performs node-level optimal transport.
We evaluate GTOT-Tuning on eight downstream tasks with various GNN backbones and demonstrate that it achieves state-of-the-art fine-tuning performance for GNNs.
- Score: 28.679909084727594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the pretrain-finetuning paradigm has attracted considerable attention in the graph learning community because it alleviates the label-scarcity problem in many real-world applications. Current studies apply existing techniques, such as weight constraints and representation constraints derived from image or text data, to transfer invariant knowledge from the pre-training stage to the fine-tuning stage. However, these methods fail to preserve the invariances rooted in graph structure and Graph Neural Network (GNN)-style models. In this paper, we present a novel optimal transport-based fine-tuning framework for GNN-style backbones, called GTOT-Tuning (Graph Topology induced Optimal Transport fine-Tuning). GTOT-Tuning is designed to exploit the properties of graph data to better preserve the representations produced by the fine-tuned network. Toward this goal, we formulate graph local knowledge transfer as an Optimal Transport (OT) problem with a structural prior and construct the GTOT regularizer to constrain the behavior of the fine-tuned model. By using the adjacency relationships among nodes, the GTOT regularizer performs node-level optimal transport while pruning redundant transport paths, resulting in efficient knowledge transfer from the pre-trained model. We evaluate GTOT-Tuning on eight downstream tasks with various GNN backbones and demonstrate that it achieves state-of-the-art fine-tuning performance for GNNs.
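To make the formulation above concrete, the following is a minimal PyTorch sketch of an adjacency-masked, entropy-regularized optimal-transport penalty in the spirit of the GTOT regularizer. It is an illustrative approximation under stated assumptions, not the authors' released implementation: the function name, the cosine-distance cost, the uniform marginals, and the Sinkhorn solver are choices made here for clarity.

```python
import torch

def gtot_regularizer(z_pre, z_ft, adj, eps=0.05, n_iters=20):
    """Adjacency-masked entropic OT penalty (illustrative sketch, not the paper's code).

    z_pre : (N, d) node embeddings from the frozen pre-trained GNN
    z_ft  : (N, d) node embeddings from the model being fine-tuned
    adj   : (N, N) dense adjacency matrix with self-loops (the structural prior)
    """
    # Cosine-distance cost between pre-trained and fine-tuned node embeddings.
    z_pre_n = torch.nn.functional.normalize(z_pre, dim=-1)
    z_ft_n = torch.nn.functional.normalize(z_ft, dim=-1)
    cost = 1.0 - z_pre_n @ z_ft_n.t()                        # (N, N)

    # Restrict transport to adjacent node pairs: the mask encodes the graph
    # topology and removes transport paths between unrelated nodes.
    mask = (adj > 0).float()
    kernel = torch.exp(-cost / eps) * mask                   # masked Gibbs kernel

    n = z_pre.size(0)
    mu = torch.full((n,), 1.0 / n, device=z_pre.device)      # uniform source marginal
    nu = torch.full((n,), 1.0 / n, device=z_pre.device)      # uniform target marginal

    # Sinkhorn iterations for the entropic transport plan under the mask.
    u = torch.ones_like(mu)
    v = torch.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (kernel @ v + 1e-9)
        v = nu / (kernel.t() @ u + 1e-9)
    plan = u.unsqueeze(1) * kernel * v.unsqueeze(0)          # (N, N) transport plan

    # The penalty is the transport cost; small values mean the fine-tuned node
    # representations stay close, through adjacent nodes, to the pre-trained ones.
    return (plan * cost).sum()
```

During fine-tuning, such a term would be added to the downstream objective, e.g. `loss = task_loss + lam * gtot_regularizer(z_pre, z_ft, adj)`, with `z_pre` computed by a frozen copy of the pre-trained backbone and `z_ft` by the model being tuned.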
Related papers
- Make Graph Neural Networks Great Again: A Generic Integration Paradigm of Topology-Free Patterns for Traffic Speed Prediction [29.096421050684516]
We propose a generic model that enables current GNN-based methods to preserve topology-free patterns.
Specifically, we first develop a Dual Cross-Scale Transformer (DCST) architecture, including a Spatial Transformer and a Temporal Transformer, to preserve the cross-scale topology-free patterns.
We then propose a distillation-style learning framework, in which the existing GNN-based methods are considered as the teacher model, and the proposed DCST architecture is considered as the student model.
arXiv Detail & Related papers (2024-06-24T07:32:58Z) - Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z) - Tensor-view Topological Graph Neural Network [16.433092191206534]
Graph neural networks (GNNs) have recently gained growing attention in graph learning.
Existing GNNs only use local information from a very limited neighborhood around each node.
We propose Tensor-view Topological Graph Neural Network (TTG-NN), a simple yet effective class of deep learning models for graph data.
Real data experiments show that the proposed TTG-NN outperforms 20 state-of-the-art methods on various graph benchmarks.
arXiv Detail & Related papers (2024-01-22T14:55:01Z) - Fine-tuning Graph Neural Networks by Preserving Graph Generative
Patterns [13.378277755978258]
We show that the structural divergence between pre-training and downstream graphs significantly limits the transferability when using the vanilla fine-tuning strategy.
We propose G-Tuning to preserve the generative patterns of downstream graphs.
G-Tuning demonstrates average improvements of 0.5% and 2.6% on in-domain and out-of-domain transfer learning experiments, respectively.
arXiv Detail & Related papers (2023-12-21T05:17:10Z) - Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies (a generic sketch of this freeze-and-prompt pattern appears after this list).
arXiv Detail & Related papers (2023-09-18T20:12:17Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike Lottery Ticket Hypothesis (LTH)-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Training Robust Graph Neural Networks with Topology Adaptive Edge
Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Bayesian Spatio-Temporal Graph Convolutional Network for Traffic
Forecasting [22.277878492878475]
We propose a Bayesian Spatio-Temporal Graph Convolutional Network (BSTGCN) for traffic prediction.
The graph structure in our network is learned from the physical topology of the road network and traffic data in an end-to-end manner.
We verify the effectiveness of our method on two real-world datasets, and the experimental results demonstrate that BSTGCN attains superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-10-15T03:41:37Z)
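The "Deep Prompt Tuning for Graph Transformers" entry above contrasts full fine-tuning with freezing the pre-trained weights and updating only added tokens. Below is a minimal, hypothetical sketch of that generic freeze-and-prompt pattern; the class name, the virtual-node wiring of the prompt tokens, and the trainable prediction head are illustrative assumptions, not the cited paper's method.

```python
import torch
import torch.nn as nn

class PromptTunedGraphModel(nn.Module):
    """Hypothetical sketch of prompt-style tuning of a pre-trained graph model.

    `backbone` is assumed to map (node_feats, adj) to node embeddings of the
    same feature dimension; all names here are placeholders, not a real API.
    """

    def __init__(self, backbone, num_prompts, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        # Freeze every pre-trained parameter; only the prompt tokens and the
        # small prediction head receive gradients.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, feat_dim))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, node_feats, adj):
        n, k = node_feats.size(0), self.prompts.size(0)
        # Append prompt tokens as virtual nodes connected to every real node.
        x = torch.cat([node_feats, self.prompts], dim=0)
        adj_aug = torch.zeros(n + k, n + k, device=adj.device, dtype=adj.dtype)
        adj_aug[:n, :n] = adj
        adj_aug[:n, n:] = 1.0
        adj_aug[n:, :n] = 1.0
        h = self.backbone(x, adj_aug)            # frozen forward pass
        return self.head(h[:n].mean(dim=0))      # graph-level prediction
```

Only `self.prompts` and `self.head` are updated during training, so a single frozen backbone can be shared across many downstream tasks instead of storing one fine-tuned copy per task.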