Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment
- URL: http://arxiv.org/abs/2502.02017v1
- Date: Tue, 04 Feb 2025 05:09:23 GMT
- Title: Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment
- Authors: Shuo Wang, Bokui Wang, Zhixiang Shen, Boyan Deng, Zhao Kang
- Abstract summary: Real-world graphs are often sparse and prone to noisy connections and adversarial attacks.
We propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information.
By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains.
- Score: 9.215549756572976
- Abstract: Recent advances in computer vision (CV) and natural language processing (NLP) have inspired researchers to develop general-purpose graph foundation models through pre-training across diverse domains. However, a fundamental challenge arises from the substantial differences in graph topologies across domains. Additionally, real-world graphs are often sparse and prone to noisy connections and adversarial attacks. To address these issues, we propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information to facilitate robust knowledge transfer. MDGFM bridges different domains by adaptively balancing features and topology while refining original graphs to eliminate noise and align topological structures. To further enhance knowledge transfer, we introduce an efficient prompt-tuning approach. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains. Theoretical analyses provide guarantees of MDGFM's effectiveness and domain generalization capabilities. Extensive experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of our method.
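The abstract describes balancing features against topology and refining noisy graphs. The paper does not publish its exact refinement rule here, so the following is only a minimal illustrative sketch of the general idea: blend the observed adjacency with a feature-similarity graph, then sparsify each row to suppress noisy edges. The blend weight `alpha` and top-`k` sparsification are assumptions for illustration, not MDGFM's actual design.

```python
import numpy as np

def refine_graph(A, X, alpha=0.5, k=3):
    """Blend observed topology with a feature-similarity graph, then
    keep only each node's k strongest connections (denoising).

    A : (n, n) observed adjacency matrix
    X : (n, d) node feature matrix
    alpha : balance between topology (alpha) and features (1 - alpha)
    """
    # Cosine similarity between node features
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S_feat = Xn @ Xn.T
    # Row-normalize the observed adjacency
    deg = A.sum(axis=1, keepdims=True) + 1e-12
    S_topo = A / deg
    # Blend the two views of graph structure
    S = alpha * S_topo + (1 - alpha) * S_feat
    np.fill_diagonal(S, 0.0)
    # Sparsify: keep the top-k entries per row, zero out the rest
    idx = np.argsort(S, axis=1)[:, :-k]
    refined = S.copy()
    np.put_along_axis(refined, idx, 0.0, axis=1)
    return refined
```

Under this sketch, raising `alpha` trusts the observed edges more; lowering it leans on feature similarity when the topology is noisy.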
Related papers
- Gradual Domain Adaptation for Graph Learning [13.143891794601162]
We present a graph gradual domain adaptation (GGDA) framework with the construction of a compact domain sequence.
Our approach starts with an efficient generation of knowledge-preserving intermediate graphs over the Fused Gromov-Wasserstein (FGW) metric.
Our framework concretizes the intractable inter-domain distance $W_p(\mu_t, \mu_{t+1})$ via implementable upper and lower bounds.
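One cheap, implementable upper bound in the spirit of this entry: the FGW objective evaluated at any feasible coupling bounds the true FGW distance from above. The sketch below evaluates it at the product coupling $\pi = pq^\top$; this is an illustrative bound, not the paper's actual bound construction.

```python
import numpy as np

def fgw_upper_bound(M, C1, C2, p, q, alpha=0.5):
    """Evaluate the Fused Gromov-Wasserstein objective at the product
    coupling pi = p q^T. Any feasible coupling upper-bounds the true
    FGW distance, so this is a crude but valid bound.

    M  : (n, m) feature-distance matrix between the two graphs
    C1 : (n, n) intra-graph structure matrix of graph 1
    C2 : (m, m) intra-graph structure matrix of graph 2
    p, q : node weight distributions (each sums to 1)
    """
    # Linear (feature) term: E[M] under the product coupling
    feature_term = p @ M @ q
    # Quadratic (structure) term E[(C1 - C2)^2] expands into three sums
    e1 = p @ (C1 ** 2) @ p
    e2 = q @ (C2 ** 2) @ q
    cross = (p @ C1 @ p) * (q @ C2 @ q)
    structure_term = e1 + e2 - 2 * cross
    return (1 - alpha) * feature_term + alpha * structure_term
```

Minimizing over couplings (e.g. with an optimal-transport solver) recovers the exact FGW distance; the product coupling merely gives a closed-form ceiling.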
arXiv Detail & Related papers (2025-01-29T06:48:59Z)
- One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs [61.9759512646523]
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns.
Existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset.
We propose a novel cross-domain pretraining framework, "one model for one graph".
arXiv Detail & Related papers (2024-11-30T01:49:45Z)
- Adaptive Coordinators and Prompts on Heterogeneous Graphs for Cross-Domain Recommendations [31.05975545409408]
We develop HAGO, a framework to integrate multi-domain graphs into a cohesive structure.
We also develop a universal multi-domain graph pre-training strategy.
Our solutions outperform state-of-the-art methods in multi-domain recommendation scenarios.
arXiv Detail & Related papers (2024-10-15T15:50:53Z)
- SPA: A Graph Spectral Alignment Perspective for Domain Adaptation [41.89873161315133]
Unsupervised domain adaptation (UDA) is a pivotal paradigm in machine learning for extending an in-domain model to distinct target domains whose data distributions differ.
Most prior works focus on capturing the inter-domain transferability but largely overlook rich intra-domain structures, which empirically results in even worse discriminability.
We introduce a novel graph SPectral Alignment (SPA) framework to tackle the tradeoff.
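A simple way to see what "spectral alignment" can mean for graphs: compare the low end of the normalized-Laplacian spectra of a source and a target graph. The sketch below is only a rough proxy for the idea; SPA's actual objective is more involved.

```python
import numpy as np

def spectral_gap(A_src, A_tgt, k=4):
    """Compare the k smallest normalized-Laplacian eigenvalues of two
    graphs as a simple spectral (mis)alignment score: 0 means the low
    spectra coincide, larger values mean greater structural mismatch.
    """
    def lap_eigs(A):
        d = A.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
        # Symmetric normalized Laplacian: I - D^{-1/2} A D^{-1/2}
        L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
        return np.sort(np.linalg.eigvalsh(L))[:k]
    return float(np.abs(lap_eigs(A_src) - lap_eigs(A_tgt)).sum())
```

The low eigenvalues encode community and connectivity structure, which is why spectral summaries are a natural handle for cross-domain comparison.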
arXiv Detail & Related papers (2023-10-26T17:13:48Z)
- Augmenting Knowledge Transfer across Graphs [16.50013525404218]
We present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs.
In particular, we introduce a novel notion named trinity signal that can naturally formulate various graph signals at different granularity.
We show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
arXiv Detail & Related papers (2022-12-09T08:46:02Z)
- Relation Matters: Foreground-aware Graph-based Relational Reasoning for Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection (DomainD), named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four DomainD benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z) - Domain Adaptation by Topology Regularization [0.0]
Domain adaptation (DA) or transfer learning (TL) enables algorithms to transfer knowledge from a labelled (source) data set to an unlabelled but related (target) data set of interest.
We propose to leverage global data structure by applying a topological data analysis technique called persistent homology to TL.
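A small taste of the topological summaries this entry applies to transfer learning: the 0-dimensional persistent homology of a Vietoris-Rips filtration on a point cloud has death times equal to the edge lengths of a Euclidean minimum spanning tree. The sketch below computes them with Prim's algorithm; it illustrates the mathematical object only, not the paper's regularization scheme.

```python
import numpy as np

def h0_persistence(points):
    """Death times of 0-dimensional persistent homology classes for a
    Vietoris-Rips filtration of a point cloud. These equal the edge
    lengths of a Euclidean minimum spanning tree: each edge records the
    scale at which two connected components merge.
    """
    n = len(points)
    # Pairwise Euclidean distances
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()          # cheapest edge from the tree to each node
    deaths = []
    for _ in range(n - 1):      # Prim's algorithm
        best[in_tree] = np.inf  # never reconnect to nodes already in tree
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return sorted(deaths)
```

Long-lived bars (large death times) correspond to well-separated clusters, which is the kind of global structure a topological regularizer can preserve across domains.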
arXiv Detail & Related papers (2021-01-28T16:45:41Z)
- Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation which leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
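The propagation idea above can be sketched in a few lines: stack class prototypes from several domains, connect them by similarity, and update each prototype toward its semantically adjacent neighbors. The softmax temperature and residual update here are illustrative assumptions; LtC-MSDA's actual graph construction and network are richer.

```python
import numpy as np

def propagate_prototypes(protos, tau=5.0):
    """One step of information propagation over a knowledge graph whose
    nodes are class prototypes from multiple domains. Edges are softmax-
    normalized cosine similarities; each prototype moves toward its
    semantically adjacent neighbors.

    protos : (n_prototypes, d) stacked prototype vectors
    """
    P = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)
    sim = P @ P.T                       # cosine similarity between prototypes
    np.fill_diagonal(sim, -np.inf)      # no self-edges
    W = np.exp(tau * sim)
    W /= W.sum(axis=1, keepdims=True)   # row-stochastic adjacency
    return 0.5 * protos + 0.5 * (W @ protos)  # residual propagation step
```

After one step, prototypes of the same class drawn from different domains are pulled closer together, which is the aggregation effect the framework exploits.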
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
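Channel attention of the kind DCAN conditions on the domain follows the squeeze-and-excitation pattern: pool each channel globally, pass the result through a small MLP, and gate the channels with a sigmoid. The sketch below shows that pattern with toy stand-in weights; DCAN additionally selects the gating branch per domain, which is not modeled here.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel gating: global-average-pool
    each channel, run a two-layer MLP, and rescale the channels with a
    sigmoid gate in (0, 1).

    feat : (channels, height, width) feature map
    W1   : (channels, hidden) squeeze weights
    W2   : (hidden, channels) excite weights
    """
    squeezed = feat.mean(axis=(1, 2))            # (channels,) global pool
    hidden = np.maximum(squeezed @ W1, 0.0)      # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid gate per channel
    return feat * gate[:, None, None]            # rescale each channel
```

Making `W1`/`W2` domain-specific while sharing the convolutional backbone is one way to "excite distinct convolutional channels" per domain.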
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.