USBD: Universal Structural Basis Distillation for Source-Free Graph Domain Adaptation
- URL: http://arxiv.org/abs/2602.08431v1
- Date: Mon, 09 Feb 2026 09:39:07 GMT
- Title: USBD: Universal Structural Basis Distillation for Source-Free Graph Domain Adaptation
- Authors: Yingxu Wang, Kunyu Zhang, Mengzhu Wang, Siyang Gao, Nan Yin
- Abstract summary: SF-GDA is pivotal for privacy-preserving knowledge transfer across graph datasets. We propose the Universal Structural Basis Distillation, a framework that shifts the paradigm from adapting a biased model to learning a universal structural basis for SF-GDA.
- Score: 28.47018372381707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SF-GDA is pivotal for privacy-preserving knowledge transfer across graph datasets. Although recent works incorporate structural information, they implicitly condition adaptation on the smoothness priors of source-trained GNNs, thereby limiting their generalization to structurally distinct targets. This dependency becomes a critical bottleneck under significant topological shifts, where the source model misinterprets distinct topological patterns unseen in the source domain as noise, rendering pseudo-label-based adaptation unreliable. To overcome this limitation, we propose Universal Structural Basis Distillation (USBD), a framework that shifts the paradigm from adapting a biased model to learning a universal structural basis for SF-GDA. Instead of adapting a biased source model to a specific target, our core idea is to construct a structure-agnostic basis that proactively covers the full spectrum of potential topological patterns. Specifically, USBD employs a bi-level optimization framework to distill the source dataset into a compact structural basis. By enforcing the prototypes to span the full Dirichlet energy spectrum, the learned basis explicitly captures diverse topological motifs, ranging from low-frequency clusters to high-frequency chains, beyond those present in the source. This ensures that the learned basis creates a comprehensive structural covering capable of handling targets with disparate structures. For inference, we introduce a spectral-aware ensemble mechanism that dynamically activates the optimal prototype combination based on the spectral fingerprint of the target graph. Extensive experiments on benchmarks demonstrate that USBD significantly outperforms state-of-the-art methods, particularly in scenarios with severe structural shifts, while achieving superior computational efficiency by decoupling the adaptation cost from the target data scale.
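To make the abstract's spectral vocabulary concrete, the sketch below illustrates the two quantities it relies on: the Dirichlet energy, which separates low-frequency (smooth, cluster-like) from high-frequency (oscillating, chain-like) signals on a graph, and a spectral fingerprint used to weight prototypes. This is a minimal, hypothetical illustration of the general concepts, not the paper's implementation; all function names and the fingerprint/weighting scheme here are the editor's own assumptions.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def dirichlet_energy(L, x):
    """Rayleigh quotient x^T L x / x^T x: how 'rough' signal x is on the graph."""
    return float(x @ L @ x) / float(x @ x)

def spectral_fingerprint(L, bins=4):
    """Histogram of Laplacian eigenvalues in [0, 2] as a coarse graph descriptor."""
    eigvals = np.linalg.eigvalsh(L)
    hist, _ = np.histogram(eigvals, bins=bins, range=(0.0, 2.0))
    return hist / max(hist.sum(), 1)

# A 4-node path graph 0-1-2-3 (a "high-frequency chain" in the abstract's terms).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)

smooth = np.array([1.0, 1.0, 1.0, 1.0])         # low-frequency signal
oscillating = np.array([1.0, -1.0, 1.0, -1.0])  # high-frequency signal
# Smooth signals have lower Dirichlet energy than oscillating ones.
assert dirichlet_energy(L, smooth) < dirichlet_energy(L, oscillating)

# Hypothetical ensemble step: weight prototypes by fingerprint similarity.
target_fp = spectral_fingerprint(L)
prototype_fps = [spectral_fingerprint(L),                  # chain-like prototype
                 np.array([0.7, 0.1, 0.1, 0.1])]           # cluster-like prototype
dists = np.array([np.abs(target_fp - fp).sum() for fp in prototype_fps])
weights = np.exp(-dists) / np.exp(-dists).sum()  # closer fingerprint -> larger weight
```

Under this toy weighting, the chain-like prototype receives the larger weight for a chain-like target, mirroring (in spirit only) the paper's spectral-aware activation of prototype combinations.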
Related papers
- SA^2GFM: Enhancing Robust Graph Foundation Models with Structure-Aware Semantic Augmentation [20.028450229306554]
We present SA2GFM, a robust Graph Foundation Models (GFMs) framework that improves domain-adaptive representations. We show that SA2GFM outperforms 9 state-of-the-art baselines in terms of effectiveness and robustness against random noise and adversarial perturbations for node and graph classification.
arXiv Detail & Related papers (2025-11-26T08:26:01Z) - HiGFA: Hierarchical Guidance for Fine-grained Data Augmentation with Diffusion Models [82.10385962490051]
Generative diffusion models show promise for data augmentation. Applying them to fine-grained tasks presents a significant challenge. HiGFA is a hierarchical, confidence-driven orchestration that generates diverse yet faithful synthetic images.
arXiv Detail & Related papers (2025-11-16T10:46:16Z) - Parameter-Free Structural-Diversity Message Passing for Graph Neural Networks [8.462209415744098]
Graph Neural Networks (GNNs) have shown remarkable performance in structured data modeling tasks such as node classification. This paper proposes a parameter-free graph neural network framework based on structural diversity. The framework is inspired by structural diversity theory and designs a unified structural-diversity message passing mechanism.
arXiv Detail & Related papers (2025-08-27T13:42:45Z) - AlphaFold Database Debiasing for Robust Inverse Folding [58.792020809180336]
We introduce a Debiasing Structure AutoEncoder (DeSAE) that learns to reconstruct native-like conformations from intentionally corrupted backbone geometries. At inference, applying DeSAE to AFDB structures produces debiased structures that significantly improve inverse folding performance.
arXiv Detail & Related papers (2025-06-10T02:25:31Z) - Robust Federated Learning on Edge Devices with Domain Heterogeneity [13.362209980631876]
Federated Learning (FL) allows collaborative training while ensuring data privacy across distributed edge devices. We introduce a new framework to address this challenge by improving the generalization ability of the FL global model. We introduce FedAPC, a prototype-based FL framework designed to enhance feature diversity and model robustness.
arXiv Detail & Related papers (2025-05-15T09:53:14Z) - DeCaf: A Causal Decoupling Framework for OOD Generalization on Node Classification [14.96980804513399]
Graph Neural Networks (GNNs) are susceptible to distribution shifts, creating vulnerability and security issues in critical domains.
Existing methods that target learning an invariant (feature, structure)-label mapping often depend on oversimplified assumptions about the data generation process.
We introduce a more realistic graph data generation model using Structural Causal Models (SCMs).
We propose a causal decoupling framework, DeCaf, that independently learns unbiased feature-label and structure-label mappings.
arXiv Detail & Related papers (2024-10-27T00:22:18Z) - TopoFR: A Closer Look at Topology Alignment on Face Recognition [58.45515807380505]
We propose TopoFR, a novel FR model that leverages a topological structure alignment strategy called PTSA and a hard sample mining strategy named SDE. PTSA uses persistent homology to align the topological structures of the input and latent spaces, effectively preserving the structure information and improving the generalization performance of FR model. Experimental results on popular face benchmarks demonstrate the superiority of our TopoFR over the state-of-the-art methods.
arXiv Detail & Related papers (2024-10-14T14:58:30Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Domain Adaptation by Topology Regularization [0.0]
Domain adaptation (DA) or transfer learning (TL) enables algorithms to transfer knowledge from a labelled (source) data set to an unlabelled but related (target) data set of interest.
We propose to leverage global data structure by applying a topological data analysis technique called persistent homology to TL.
arXiv Detail & Related papers (2021-01-28T16:45:41Z) - Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) is to learn classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z) - Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.