Domain Generalization Deep Graph Transformation
- URL: http://arxiv.org/abs/2305.11389v2
- Date: Tue, 23 May 2023 20:42:08 GMT
- Title: Domain Generalization Deep Graph Transformation
- Authors: Shiyu Wang, Guangji Bai, Qingyang Zhu, Zhaohui Qin, Liang Zhao
- Abstract summary: Graph transformation that predicts graph transition from one mode to another is an important and common problem.
We propose a multi-input, multi-output, hypernetwork-based graph neural network (MultiHyperGNN) that employs an encoder and a decoder to encode the topologies of both input and output modes.
Comprehensive experiments show that MultiHyperGNN outperforms competing models in both prediction and domain generalization tasks.
- Score: 5.456279425545284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph transformation that predicts graph transition from one mode to another
is an important and common problem. Despite much progress in developing
advanced graph transformation techniques in recent years, the fundamental
assumption typically required in machine-learning models that the testing and
training data preserve the same distribution does not always hold. As a result,
domain generalization graph transformation that predicts graphs not available
in the training data is under-explored, with multiple key challenges to be
addressed including (1) the extreme space complexity when training on all
input-output mode combinations, (2) difference of graph topologies between the
input and the output modes, and (3) how to generalize the model to (unseen)
target domains that are not in the training data. To fill the gap, we propose a
multi-input, multi-output, hypernetwork-based graph neural network
(MultiHyperGNN) that employs an encoder and a decoder to encode topologies of
both input and output modes and semi-supervised link prediction to enhance the
graph transformation task. Instead of training on all mode combinations,
MultiHyperGNN preserves a constant space complexity with the encoder and the
decoder produced by two novel hypernetworks. Comprehensive experiments show
that MultiHyperGNN achieves superior performance compared to competing models in both
prediction and domain generalization tasks.
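To make the hypernetwork idea concrete, here is a minimal sketch (illustrative class and tensor names, not the authors' implementation) of an encoder whose GCN-style weights are generated by a shared hypernetwork from a mode descriptor, so that adding input-output mode combinations does not grow the number of trainable encoder parameters:

```python
# Minimal sketch (not the authors' code): a hypernetwork emits the weights of a
# one-layer GCN-style encoder from a mode descriptor, so extra modes do not add
# encoder parameters. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class HyperEncoder(nn.Module):
    def __init__(self, mode_dim, in_dim, hid_dim):
        super().__init__()
        self.in_dim, self.hid_dim = in_dim, hid_dim
        # Hypernetwork: maps a mode descriptor to the flattened GCN weight matrix.
        self.hyper = nn.Sequential(
            nn.Linear(mode_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim * hid_dim),
        )

    def forward(self, x, adj_norm, mode_vec):
        # Generate mode-specific weights, then run one GCN propagation step.
        w = self.hyper(mode_vec).view(self.in_dim, self.hid_dim)
        return torch.relu(adj_norm @ x @ w)

# Toy usage: 5 nodes, 8 input features, two modes sharing one hypernetwork.
x = torch.randn(5, 8)
adj = torch.eye(5)                      # stand-in for a normalized adjacency
enc = HyperEncoder(mode_dim=4, in_dim=8, hid_dim=16)
z_mode_a = enc(x, adj, torch.tensor([1., 0., 0., 0.]))
z_mode_b = enc(x, adj, torch.tensor([0., 1., 0., 0.]))
print(z_mode_a.shape, z_mode_b.shape)   # torch.Size([5, 16]) twice
```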
Related papers
- Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract) [56.55728466130238]
We introduce the novel MPGraf model, which aims to integrate the regression capabilities of Transformers with the link prediction strengths of GNNs.
We conduct extensive offline and online experiments to rigorously evaluate the performance of MPGraf.
arXiv Detail & Related papers (2024-09-25T03:33:47Z)
- GraphFM: A Scalable Framework for Multi-Graph Pretraining [2.882104808886318]
We introduce a scalable multi-graph multi-task pretraining approach specifically tailored for node classification tasks across diverse graph datasets from different domains.
We demonstrate the efficacy of our approach by training a model on 152 different graph datasets comprising over 7.4 million nodes and 189 million edges.
Our results show that pretraining on a diverse array of real and synthetic graphs improves the model's adaptability and stability, while performing competitively with state-of-the-art specialist models.
arXiv Detail & Related papers (2024-07-16T16:51:43Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
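As a rough illustration of the random-walk context sampling mentioned above (a hypothetical sketch, not the GSPT code):

```python
# Hypothetical sketch of random-walk node-context sampling, in the spirit of
# the GSPT summary above; not the paper's implementation.
import random

def random_walk_context(adj_list, start, walk_len=8):
    """Return a node-context sequence from a simple unbiased random walk."""
    walk = [start]
    for _ in range(walk_len - 1):
        neighbors = adj_list[walk[-1]]
        if not neighbors:          # dead end: stop early
            break
        walk.append(random.choice(neighbors))
    return walk

# Toy graph: 0-1-2-3 path plus a chord 1-3.
adj_list = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
print(random_walk_context(adj_list, start=0))
```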
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
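For background, a generic advection-diffusion equation on a graph couples Laplacian diffusion with a transport term; the exact operators learned by ADiT may differ, so the form below is only indicative:

```latex
% Generic graph advection-diffusion dynamics (illustrative form, not
% necessarily the exact equation used by ADiT): L is the normalized graph
% Laplacian (diffusion) and V is a learned edge-level transport operator.
\frac{\partial X(t)}{\partial t} = -L\,X(t) + V\,X(t), \qquad X(0) = X_0
```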
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification [6.977634174845066]
A universal feature encoder for both graph and hypergraph representation learning, called UniG-Encoder, is designed.
The architecture starts with a forward transformation of the topological relationships of connected nodes into edge or hyperedge features.
The encoded node embeddings are then derived from the reversed transformation, described by the transpose of the projection matrix.
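The projection-and-transpose mechanism described above can be sketched on a toy hypergraph; the normalization and matrix names below are assumptions, not the paper's exact construction:

```python
# Illustrative sketch of projecting node features to hyperedge features and
# mapping them back with the transpose of the projection matrix.
import numpy as np

# Toy hypergraph: 4 nodes, 2 hyperedges ({0,1,2} and {2,3}).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)            # |V| x |E| incidence matrix
P = (H / H.sum(axis=0, keepdims=True)).T       # |E| x |V| normalized projection

X = np.random.randn(4, 8)                      # node features, |V| x d
edge_feats = P @ X                             # forward: nodes -> hyperedge features
node_embed = P.T @ edge_feats                  # reverse: transpose maps back to nodes
print(edge_feats.shape, node_embed.shape)      # (2, 8) (4, 8)
```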
arXiv Detail & Related papers (2023-08-03T09:32:50Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Finding Diverse and Predictable Subgraphs for Graph Domain Generalization [88.32356432272356]
This paper focuses on out-of-distribution generalization on graphs where performance drops due to the unseen distribution shift.
We propose a new graph domain generalization framework, dubbed as DPS, by constructing multiple populations from the source domains.
Experiments on both node-level and graph-level benchmarks show that the proposed DPS achieves impressive performance for various graph domain generalization tasks.
arXiv Detail & Related papers (2022-06-19T07:57:56Z)
- Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains [3.358838755118655]
We introduce the Domain-Adversarial Variational Graph Autoencoder (DAVGAE) to solve this cross-domain prerequisite chain learning task efficiently.
Our novel model consists of a variational graph autoencoder (VGAE) and a domain discriminator.
Results show that our model outperforms recent graph-based baselines while using only 1/10 of the graph scale and 1/3 of the computation time.
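A minimal sketch of a VGAE paired with a domain discriminator, in the spirit of the summary above (layer sizes and names are illustrative assumptions, not the DAVGAE implementation):

```python
# Hypothetical sketch: variational graph autoencoder plus a domain
# discriminator on the latent node embeddings; not the paper's code.
import torch
import torch.nn as nn

class VGAEWithDomainDisc(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim, n_domains):
        super().__init__()
        self.gc = nn.Linear(in_dim, hid_dim)            # shared GCN-style layer
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)
        self.domain_disc = nn.Sequential(                # predicts the domain of each node
            nn.Linear(lat_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, n_domains),
        )

    def forward(self, x, adj_norm):
        h = torch.relu(adj_norm @ self.gc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        adj_recon = torch.sigmoid(z @ z.t())              # inner-product decoder
        domain_logits = self.domain_disc(z)
        return adj_recon, domain_logits, mu, logvar

# Toy usage: 6 nodes, 10 features, source vs. target domain.
x, adj = torch.randn(6, 10), torch.eye(6)
model = VGAEWithDomainDisc(10, 16, 8, n_domains=2)
adj_hat, dom_logits, mu, logvar = model(x, adj)
print(adj_hat.shape, dom_logits.shape)                    # (6, 6) (6, 2)
```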
arXiv Detail & Related papers (2021-09-17T19:07:27Z)