Domain Generalization Deep Graph Transformation
- URL: http://arxiv.org/abs/2305.11389v2
- Date: Tue, 23 May 2023 20:42:08 GMT
- Title: Domain Generalization Deep Graph Transformation
- Authors: Shiyu Wang, Guangji Bai, Qingyang Zhu, Zhaohui Qin, Liang Zhao
- Abstract summary: Graph transformation, which predicts how a graph transitions from one mode to another, is an important and common problem.
We propose a multi-input, multi-output, hypernetwork-based graph neural network (MultiHyperGNN) that employs an encoder and a decoder to encode the topologies of both input and output modes.
Comprehensive experiments show that MultiHyperGNN outperforms competing models in both prediction and domain generalization tasks.
- Score: 5.456279425545284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph transformation that predicts graph transition from one mode to another
is an important and common problem. Despite much progress in developing
advanced graph transformation techniques in recent years, the fundamental
assumption typically required in machine-learning models that the testing and
training data preserve the same distribution does not always hold. As a result,
domain generalization graph transformation that predicts graphs not available
in the training data is under-explored, with multiple key challenges to be
addressed including (1) the extreme space complexity when training on all
input-output mode combinations, (2) difference of graph topologies between the
input and the output modes, and (3) how to generalize the model to (unseen)
target domains that are not in the training data. To fill the gap, we propose a
multi-input, multi-output, hypernetwork-based graph neural network
(MultiHyperGNN) that employs an encoder and a decoder to encode topologies of
both input and output modes and semi-supervised link prediction to enhance the
graph transformation task. Instead of training on all mode combinations,
MultiHyperGNN preserves a constant space complexity with the encoder and the
decoder produced by two novel hypernetworks. Comprehensive experiments show
that MultiHyperGNN outperforms competing models in both
prediction and domain generalization tasks.
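The key space-saving idea in the abstract, that a single hypernetwork generates the weights of the mode-specific encoder (and likewise the decoder) instead of training one model per input-output mode pair, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual architecture: the `HyperNetwork` class, the one-layer GCN encoder, and all dimensions below are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class HyperNetwork:
    """Maps a mode embedding to the weight matrix of a one-layer graph
    encoder. One hypernetwork serves every mode, so the parameter count
    stays constant no matter how many mode combinations exist."""
    def __init__(self, mode_dim, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.in_dim, self.out_dim = in_dim, out_dim
        # Linear map from the mode embedding to a flattened GNN weight matrix.
        self.W = rng.standard_normal((mode_dim, in_dim * out_dim)) * 0.1

    def __call__(self, mode_emb):
        # Generate encoder weights conditioned on the mode embedding.
        return (mode_emb @ self.W).reshape(self.in_dim, self.out_dim)

def gcn_layer(A, X, W):
    """Symmetric-normalized graph convolution:
    relu(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

# Toy 4-node path graph with 8-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).standard_normal((4, 8))

hyper = HyperNetwork(mode_dim=3, in_dim=8, out_dim=16)
mode_a = np.array([1.0, 0.0, 0.0])  # embedding of one input mode
mode_b = np.array([0.0, 1.0, 0.0])  # embedding of a different (possibly unseen) mode

# The same hypernetwork produces distinct encoder weights per mode.
H_a = gcn_layer(A, X, hyper(mode_a))
H_b = gcn_layer(A, X, hyper(mode_b))
print(H_a.shape)  # (4, 16)
```

Because unseen target modes only require a new mode embedding rather than new encoder parameters, this construction also hints at how generalization to modes absent from training could work.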
Related papers
- One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs [61.9759512646523]
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns.
Existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset.
We propose a novel cross-domain pretraining framework, "one model for one graph".
arXiv Detail & Related papers (2024-11-30T01:49:45Z) - Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract) [56.55728466130238]
We introduce the novel MPGraf model, which aims to integrate the regression capabilities of Transformers with the link prediction strengths of GNNs.
We conduct extensive offline and online experiments to rigorously evaluate the performance of MPGraf.
arXiv Detail & Related papers (2024-09-25T03:33:47Z) - A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z) - GraphGPT: Generative Pre-trained Graph Eulerian Transformer [8.675197550607358]
We introduce a novel generative pre-trained model for graph learning based on the Graph Eulerian Transformer (GET).
GraphGPT achieves performance comparable to or surpassing state-of-the-art methods on multiple large-scale Open Graph Benchmark (OGB) datasets.
Notably, generative pre-training enables scaling GraphGPT to 2 billion parameters while maintaining performance gains.
arXiv Detail & Related papers (2023-12-31T16:19:30Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains [3.358838755118655]
We introduce Domain-Adversarial Variational Graph Autoencoders (DAVGAE) to solve this cross-domain prerequisite chain learning task efficiently.
Our novel model consists of a variational graph autoencoder (VGAE) and a domain discriminator.
Results show that our model outperforms recent graph-based approaches while using only 1/10 of the graph scale and 1/3 of the computation time.
arXiv Detail & Related papers (2021-09-17T19:07:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.