GraphControl: Adding Conditional Control to Universal Graph Pre-trained
Models for Graph Domain Transfer Learning
- URL: http://arxiv.org/abs/2310.07365v3
- Date: Mon, 11 Mar 2024 07:33:51 GMT
- Title: GraphControl: Adding Conditional Control to Universal Graph Pre-trained
Models for Graph Domain Transfer Learning
- Authors: Yun Zhu, Yaoke Wang, Haizhou Shi, Zhenshuo Zhang, Dian Jiao, Siliang
Tang
- Abstract summary: Graph self-supervised algorithms have achieved significant success in acquiring generic knowledge from abundant unlabeled graph data.
Different graphs, even across seemingly similar domains, can differ significantly in terms of attribute semantics.
We introduce an innovative deployment module coined as GraphControl, motivated by ControlNet, to realize better graph domain transfer learning.
- Score: 28.04023419006392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-structured data, which models complex relationships between
objects, is ubiquitous and enables various Web applications. Daily
influxes of unlabeled graph data on the Web offer immense potential for these
applications. Graph self-supervised algorithms have achieved significant
success in acquiring generic knowledge from abundant unlabeled graph data.
These pre-trained models can be applied to various downstream Web applications,
saving training time and improving downstream (target) performance. However,
different graphs, even across seemingly similar domains, can differ
significantly in attribute semantics, making it difficult, if not infeasible,
to transfer the pre-trained models to downstream tasks.
For example, the additional task-specific node information in downstream tasks
(specificity) is usually deliberately omitted so that the pre-trained
representation (transferability) can be leveraged. This trade-off is termed the
"transferability-specificity dilemma" in this work. To
address this challenge, we introduce an innovative deployment module coined as
GraphControl, motivated by ControlNet, to realize better graph domain transfer
learning. Specifically, by leveraging universal structural pre-trained models
and GraphControl, we align the input space across various graphs and
incorporate unique characteristics of target data as conditional inputs. These
conditions will be progressively integrated into the model during fine-tuning
or prompt tuning through ControlNet, facilitating personalized deployment.
Extensive experiments show that our method significantly enhances the
adaptability of pre-trained models on target attributed datasets, achieving
1.4-3x performance gain. Furthermore, it outperforms training-from-scratch
methods on target data by a comparable margin and exhibits faster
convergence.
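The abstract describes a ControlNet-style design: a frozen, structure-only pre-trained encoder paired with a trainable copy that consumes downstream-specific conditions, joined through zero-initialized connections so the condition is integrated progressively during fine-tuning or prompt tuning. The PyTorch sketch below illustrates that general pattern only; the names (SimpleGCN, GraphControlSketch) and the dense-adjacency encoder are illustrative assumptions, not the authors' implementation, and the real method builds its condition from the target data's node attributes as the abstract describes.

```python
# Minimal sketch of a ControlNet-style conditional branch for a pre-trained
# graph encoder. SimpleGCN, GraphControlSketch, and the dense-adjacency layout
# are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class SimpleGCN(nn.Module):
    """Stand-in for a structure-only pre-trained encoder (dense adjacency)."""

    def __init__(self, in_dim, hid_dim, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x, adj):
        # adj: normalized adjacency with self-loops (N x N); x: node inputs (N x in_dim)
        hidden = []
        for layer in self.layers:
            x = torch.relu(layer(adj @ x))
            hidden.append(x)
        return hidden  # per-layer representations, used for the control connections


class GraphControlSketch(nn.Module):
    """Frozen pre-trained encoder plus a trainable copy fed with a condition graph."""

    def __init__(self, pretrained: SimpleGCN, cond_dim, hid_dim):
        super().__init__()
        self.frozen = pretrained
        for p in self.frozen.parameters():
            p.requires_grad = False  # keep the universal pre-trained weights fixed
        # Trainable copy that consumes the downstream-specific condition input.
        self.control = SimpleGCN(cond_dim, hid_dim, num_layers=len(pretrained.layers))
        # Zero-initialized projections: at step 0 the output equals the frozen
        # encoder's, and the condition is integrated progressively during tuning.
        self.zero_proj = nn.ModuleList(
            [nn.Linear(hid_dim, hid_dim) for _ in pretrained.layers]
        )
        for proj in self.zero_proj:
            nn.init.zeros_(proj.weight)
            nn.init.zeros_(proj.bias)

    def forward(self, x_struct, adj, x_cond, adj_cond):
        h_frozen = self.frozen(x_struct, adj)
        h_control = self.control(x_cond, adj_cond)
        # Add the (initially zero) control signal layer by layer.
        return [h + proj(c) for h, c, proj in zip(h_frozen, h_control, self.zero_proj)]
```

A downstream task head would then be trained on the last representation in this list; because the added projections start at zero, early tuning steps reproduce the frozen pre-trained behavior, and the condition's influence grows as those weights are learned.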
Related papers
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- AnyGraph: Graph Foundation Model in the Wild [16.313146933922752]
Graph foundation models offer the potential to learn robust, generalizable representations from graph data.
In this work, we investigate a unified graph model, AnyGraph, designed to handle key challenges.
Our experiments on 38 diverse graph datasets demonstrate the strong zero-shot learning performance of AnyGraph.
arXiv Detail & Related papers (2024-08-20T09:57:13Z)
- GraphFM: A Scalable Framework for Multi-Graph Pretraining [2.882104808886318]
We introduce a scalable multi-graph multi-task pretraining approach specifically tailored for node classification tasks across diverse graph datasets from different domains.
We demonstrate the efficacy of our approach by training a model on 152 different graph datasets comprising over 7.4 million nodes and 189 million edges.
Our results show that pretraining on a diverse array of real and synthetic graphs improves the model's adaptability and stability, while performing competitively with state-of-the-art specialist models.
arXiv Detail & Related papers (2024-07-16T16:51:43Z)
- UniGraph: Learning a Unified Cross-Domain Foundation Model for Text-Attributed Graphs [30.635472655668078]
The aim is a foundation model for Text-Attributed Graphs (TAGs) that can generalize to unseen graphs and tasks across diverse domains.
We propose a novel cascaded architecture of Language Models (LMs) and Graph Neural Networks (GNNs) as backbone networks.
We demonstrate the model's effectiveness in self-supervised representation learning on unseen graphs, few-shot in-context transfer, and zero-shot transfer.
arXiv Detail & Related papers (2024-02-21T09:06:31Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- Addressing the Impact of Localized Training Data in Graph Neural Networks [0.0]
Graph Neural Networks (GNNs) have achieved notable success in learning from graph-structured data.
This article aims to assess the impact of training GNNs on localized subsets of the graph.
We propose a regularization method to minimize distributional discrepancies between localized training data and graph inference.
arXiv Detail & Related papers (2023-07-24T11:04:22Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly over the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works across node classification, link prediction, and graph classification tasks; a minimal sketch of this style of augmentation appears after this list.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
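The FLAG entry above describes augmenting node features with gradient-based adversarial perturbations during training. Below is a rough, self-contained sketch of that style of augmentation for a single node-classification training step; the function name flag_step, the model(features, adj) calling convention, and the hyper-parameter values are assumptions for illustration, not the paper's reference implementation.

```python
# Rough sketch of FLAG-style augmentation: gradient-based adversarial
# perturbations of node features during a single training step.
# `model`, `features`, `adj`, `labels`, and `train_mask` are placeholders.
import torch
import torch.nn.functional as F


def flag_step(model, optimizer, features, adj, labels, train_mask,
              step_size=1e-3, ascent_steps=3):
    model.train()
    optimizer.zero_grad()

    # Start from a small random perturbation of the node features.
    perturb = torch.empty_like(features).uniform_(-step_size, step_size)
    perturb.requires_grad_(True)

    logits = model(features + perturb, adj)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask]) / ascent_steps

    for _ in range(ascent_steps - 1):
        loss.backward()
        # Ascent step on the perturbation using the sign of its gradient;
        # model parameter gradients keep accumulating across these steps.
        perturb.data = perturb.data + step_size * torch.sign(perturb.grad.data)
        perturb.grad.zero_()
        logits = model(features + perturb, adj)
        loss = F.cross_entropy(logits[train_mask], labels[train_mask]) / ascent_steps

    loss.backward()
    optimizer.step()  # single parameter update using the accumulated gradients
    return loss.item()
```

Dividing the loss by the number of ascent steps and accumulating parameter gradients across them keeps the cost close to a single standard update, which is the "free" aspect of this family of adversarial-augmentation methods.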