SA^2GFM: Enhancing Robust Graph Foundation Models with Structure-Aware Semantic Augmentation
- URL: http://arxiv.org/abs/2512.07857v1
- Date: Wed, 26 Nov 2025 08:26:01 GMT
- Title: SA^2GFM: Enhancing Robust Graph Foundation Models with Structure-Aware Semantic Augmentation
- Authors: Junhua Shi, Qingyun Sun, Haonan Yuan, Xingcheng Fu,
- Abstract summary: We present SA2GFM, a robust Graph Foundation Model (GFM) framework that improves domain-adaptive representations. We show that SA2GFM outperforms 9 state-of-the-art baselines in effectiveness and robustness against random noise and adversarial perturbations for node and graph classification.
- Score: 20.028450229306554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Foundation Models (GFMs) have made significant progress on various tasks, but their robustness against domain noise, structural perturbations, and adversarial attacks remains underexplored. A key limitation is the insufficient modeling of hierarchical structural semantics, which are crucial for generalization. In this paper, we propose SA^2GFM, a robust GFM framework that improves domain-adaptive representations through Structure-Aware Semantic Augmentation. First, we encode hierarchical structural priors by transforming entropy-based encoding trees into structure-aware textual prompts for feature augmentation. The enhanced inputs are processed by a self-supervised Information Bottleneck mechanism that distills robust, transferable representations via structure-guided compression. To address negative transfer in cross-domain adaptation, we introduce an expert adaptive routing mechanism that combines a mixture-of-experts architecture with a null-expert design. For efficient downstream adaptation, we propose a fine-tuning module that optimizes hierarchical structures through joint intra- and inter-community structure learning. Extensive experiments demonstrate that SA^2GFM outperforms 9 state-of-the-art baselines in effectiveness and robustness against random noise and adversarial perturbations for node and graph classification.
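The abstract's "mixture-of-experts with a null expert" can be illustrated with a toy sketch. This is not the paper's implementation; the class and parameter names are hypothetical, and the null expert is modeled as a zero update (so inputs routed to it pass through unchanged), which is one plausible way to curb negative transfer when no expert fits the input domain:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class NullExpertRouter:
    """Toy router over K domain experts plus one 'null' expert.

    The null expert contributes a zero update, so probability mass
    assigned to it effectively opts out of expert mixing -- a simple
    stand-in for the null-expert design described in the abstract.
    """
    def __init__(self, dim, num_experts, seed=0):
        rng = np.random.default_rng(seed)
        # gate scores over num_experts + 1 slots (last slot = null expert)
        self.gate = rng.normal(0, 0.1, (dim, num_experts + 1))
        self.experts = [rng.normal(0, 0.1, (dim, dim)) for _ in range(num_experts)]

    def forward(self, x):
        probs = softmax(x @ self.gate)               # (n, K+1)
        out = np.zeros_like(x)
        for k, W in enumerate(self.experts):
            out += probs[:, k:k + 1] * np.tanh(x @ W)
        # the null expert's share, probs[:, -1], adds nothing
        return x + out                               # residual update

x = np.random.default_rng(1).normal(size=(4, 8))
y = NullExpertRouter(dim=8, num_experts=3).forward(x)
print(y.shape)  # (4, 8)
```

If all expert weights are zero, the router reduces to the identity, which makes the "opt-out" behavior of the null path easy to check.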
Related papers
- StepVAR: Structure-Texture Guided Pruning for Visual Autoregressive Models [98.72926158261937]
We propose a training-free token pruning framework for Visual AutoRegressive models. We employ a lightweight high-pass filter to capture local texture details, while leveraging Principal Component Analysis (PCA) to preserve global structural information. To maintain valid next-scale prediction under sparse tokens, we introduce a nearest-neighbor feature propagation strategy.
arXiv Detail & Related papers (2026-03-02T11:35:05Z) - USBD: Universal Structural Basis Distillation for Source-Free Graph Domain Adaptation [28.47018372381707]
SF-GDA is pivotal for privacy-preserving knowledge transfer across graph datasets. We propose Universal Structural Basis Distillation, a framework that shifts the paradigm from adapting a biased model to learning a universal structural basis for SF-GDA.
arXiv Detail & Related papers (2026-02-09T09:39:07Z) - RAG-GFM: Overcoming In-Memory Bottlenecks in Graph Foundation Models via Retrieval-Augmented Generation [27.59455285600957]
Graph Foundation Models (GFMs) have emerged as a frontier in graph learning, which are expected to deliver transferable representations across diverse tasks. We propose RAG-GFM, a Retrieval-Augmented Generation aided Graph Foundation Model that offloads knowledge from parameters. We show that RAG-GFM consistently outperforms 13 state-of-the-art baselines in both cross-domain node and graph classification.
arXiv Detail & Related papers (2026-01-21T16:02:43Z) - Granular-ball Guided Masking: Structure-aware Data Augmentation [97.18560547134587]
Granular-ball Guided Masking (GBGM) is a structure-aware augmentation strategy guided by Granular-ball Computing (GBC). GBGM adaptively preserves semantically rich, structurally important regions while suppressing redundant areas through a coarse-to-fine hierarchical masking process. Experiments on multiple benchmarks demonstrate consistent improvements in classification accuracy and masked image reconstruction.
arXiv Detail & Related papers (2025-12-24T07:15:33Z) - TreeFedDG: Alleviating Global Drift in Federated Domain Generalization for Medical Image Segmentation [11.110381445769953]
We propose TreeFedDG, a novel tree-topology framework for federated domain generalization in medical imaging (FedDG-GD). First, we design a hierarchical parameter aggregation method based on a tree-structured topology to suppress deviations in the global model direction. Second, we introduce a parameter-difference-based style mixing method (FedStyle), which enforces mixing among clients with maximum parameter differences to enhance robustness against drift. Third, we develop a progressive personalized fusion strategy during model distribution, ensuring a balance between knowledge transfer and personalized features.
arXiv Detail & Related papers (2025-10-21T03:38:05Z) - Structure-R1: Dynamically Leveraging Structural Knowledge in LLM Reasoning through Reinforcement Learning [29.722512436773638]
We propose Structure-R1, a framework that transforms retrieved content into structured representations optimized for reasoning. We show that Structure-R1 consistently achieves competitive performance with a 7B-scale backbone model. Our theoretical analysis demonstrates how structured representations enhance reasoning by improving information density and contextual clarity.
arXiv Detail & Related papers (2025-10-16T23:19:28Z) - ReSSFormer: A Recursive Sparse Structured Transformer for Scalable and Long-Context Reasoning [0.0]
We present ReSSFormer, a Recursive Sparse Structured Transformer that integrates three complementary innovations. ReSSFormer replaces conventional depth stacking with recurrent inference, substitutes full attention with token- and expert-level sparsity, and models latent token topology directly from content.
arXiv Detail & Related papers (2025-10-02T02:05:30Z) - Topology-Aware Graph Reinforcement Learning for Dynamic Routing in Cloud Networks [7.718608301354158]
The method builds a unified framework for state representation and structural evolution. It aims to tackle the challenges of decision instability and insufficient structural awareness under dynamic topologies. Results show that the proposed method achieves efficient and robust routing in dynamic and complex cloud networks.
arXiv Detail & Related papers (2025-09-05T09:55:28Z) - Towards Efficient General Feature Prediction in Masked Skeleton Modeling [59.46799426434277]
We propose a novel General Feature Prediction framework (GFP) for efficient masked skeleton modeling. Our key innovation is replacing conventional low-level reconstruction with high-level feature prediction that spans from local motion patterns to global semantic representations.
arXiv Detail & Related papers (2025-09-03T18:05:02Z) - Hierarchical Graph Feature Enhancement with Adaptive Frequency Modulation for Visual Recognition [6.580655899524989]
Convolutional neural networks (CNNs) have demonstrated strong performance in visual recognition tasks. We propose a novel framework that integrates graph-based reasoning into CNNs to enhance both structural awareness and feature representation. The proposed HGFE module is lightweight, end-to-end trainable, and can be seamlessly integrated into standard CNN backbone networks.
arXiv Detail & Related papers (2025-08-15T14:19:50Z) - Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
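The structural-sparsification step above can be sketched with a toy heuristic. This is a rough stand-in, not the paper's method: it assumes endpoint feature cosine similarity as the proxy for edge informativeness, and all names and data are illustrative:

```python
import numpy as np

def sparsify_edges(x, edges, keep_ratio=0.5):
    """Keep the edges whose endpoints have the highest cosine similarity.

    x: (n, d) node feature matrix; edges: list of (u, v) pairs.
    Low-similarity edges are treated as potentially noisy and dropped,
    a crude proxy for structural sparsification.
    """
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = np.array([xn[u] @ xn[v] for u, v in edges])
    k = max(1, int(len(edges) * keep_ratio))
    keep = np.argsort(-sims)[:k]           # indices of the k most similar pairs
    return [edges[i] for i in sorted(keep)]

# two tight feature clusters {0,1} and {2,3}; cross-cluster edges get pruned
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
print(sparsify_edges(x, edges, keep_ratio=0.5))  # → [(0, 1), (2, 3)]
```

The self-contrasting objective would then be computed on node representations within this sparsified neighborhood.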
arXiv Detail & Related papers (2024-09-09T12:56:02Z) - Enhanced Structured State Space Models via Grouped FIR Filtering and Attention Sink Mechanisms [0.6718184400443239]
We propose an advanced architecture that mitigates challenges by decomposing A-multiplications into multiple groups.
Inspired by the "attention sink" phenomenon identified in streaming language models, we incorporate a similar mechanism to enhance the stability and performance of our model.
arXiv Detail & Related papers (2024-08-01T02:49:58Z) - Isomorphic Pruning for Vision Models [56.286064975443026]
Structured pruning reduces the computational overhead of deep neural networks by removing redundant sub-structures.
We present Isomorphic Pruning, a simple approach that demonstrates effectiveness across a range of network architectures.
arXiv Detail & Related papers (2024-07-05T16:14:53Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - SE-GSL: A General and Effective Graph Structure Learning Framework through Structural Entropy Optimization [67.28453445927825]
Graph Neural Networks (GNNs) are de facto solutions to structural data learning.
Existing graph structure learning (GSL) frameworks still lack robustness and interpretability.
This paper proposes a general GSL framework, SE-GSL, through structural entropy and the graph hierarchy abstracted in the encoding tree.
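The structural entropy underlying SE-GSL's encoding trees (and the entropy-based encoding trees in SA^2GFM above) has a standard closed form. Below is a minimal sketch of the two-level structural entropy of a graph under a fixed node partition; the toy graph and the function name are illustrative, not from either paper:

```python
import math
from collections import defaultdict

def structural_entropy(edges, partition):
    """Two-level structural entropy of an undirected graph.

    edges:     iterable of (u, v) pairs
    partition: dict mapping node -> community id
    Computes H = -sum_a g_a/(2m) * log2(V_a/(2m))
                 -sum_v d_v/(2m) * log2(d_v/V_{a(v)}),
    where d_v is node degree, V_a the total degree inside community a,
    g_a the number of edges crossing a's boundary, and 2m the degree sum.
    """
    deg = defaultdict(int)
    cut = defaultdict(int)                  # g_a: edges leaving community a
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1
    two_m = sum(deg.values())
    vol = defaultdict(int)                  # V_a: total degree inside a
    for node, d in deg.items():
        vol[partition[node]] += d
    h = 0.0
    for a, g in cut.items():
        h -= g / two_m * math.log2(vol[a] / two_m)
    for node, d in deg.items():
        h -= d / two_m * math.log2(d / vol[partition[node]])
    return h

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # the natural split
print(round(structural_entropy(edges, good), 3))  # → 1.7
```

Grouping each triangle into its own community yields lower entropy than lumping all nodes together, which is why minimizing structural entropy recovers hierarchical community structure.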
arXiv Detail & Related papers (2023-03-17T05:20:24Z) - DepGraph: Towards Any Structural Pruning [68.40343338847664]
We study general structural pruning of arbitrary architecture like CNNs, RNNs, GNNs and Transformers.
We propose a general and fully automatic method, Dependency Graph (DepGraph), to explicitly model the dependency between layers and comprehensively group parameters for pruning.
In this work, we extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision transformer for images, GAT for graph, DGCNN for 3D point cloud, alongside LSTM for language, and demonstrate that, even with a
arXiv Detail & Related papers (2023-01-30T14:02:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.