Robustness in Text-Attributed Graph Learning: Insights, Trade-offs, and New Defenses
- URL: http://arxiv.org/abs/2510.17185v1
- Date: Mon, 20 Oct 2025 05:57:54 GMT
- Title: Robustness in Text-Attributed Graph Learning: Insights, Trade-offs, and New Defenses
- Authors: Runlin Lei, Lu Yi, Mingguo He, Pengyu Qiu, Zhewei Wei, Yongchao Liu, Chuntao Hong
- Abstract summary: We introduce a unified and comprehensive framework to evaluate robustness in TAG learning. Our framework evaluates classical GNNs, robust GNNs (RGNNs), and GraphLLMs across ten datasets from four domains. Our work establishes a foundation for future research on TAG security and offers practical solutions for robust TAG learning in adversarial environments.
- Score: 34.0252107920933
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While Graph Neural Networks (GNNs) and Large Language Models (LLMs) are powerful approaches for learning on Text-Attributed Graphs (TAGs), a comprehensive understanding of their robustness remains elusive. Current evaluations are fragmented, failing to systematically investigate the distinct effects of textual and structural perturbations across diverse models and attack scenarios. To address these limitations, we introduce a unified and comprehensive framework to evaluate robustness in TAG learning. Our framework evaluates classical GNNs, robust GNNs (RGNNs), and GraphLLMs across ten datasets from four domains, under diverse text-based, structure-based, and hybrid perturbations in both poisoning and evasion scenarios. Our extensive analysis reveals multiple findings, among which three are particularly noteworthy: 1) models have inherent robustness trade-offs between text and structure, 2) the performance of GNNs and RGNNs depends heavily on the text encoder and attack type, and 3) GraphLLMs are particularly vulnerable to training data corruption. To overcome the identified trade-offs, we introduce SFT-auto, a novel framework that delivers superior and balanced robustness against both textual and structural attacks within a single model. Our work establishes a foundation for future research on TAG security and offers practical solutions for robust TAG learning in adversarial environments. Our code is available at: https://github.com/Leirunlin/TGRB.
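The clean-versus-perturbed comparison at the heart of such a benchmark can be illustrated with a minimal sketch. All names below are hypothetical stand-ins, not the framework's actual API (which is in the linked TGRB repository); the "model" is a toy keyword classifier and the "attack" a single adversarial misspelling.

```python
def accuracy(predict, inputs, labels):
    """Fraction of inputs the model labels correctly."""
    return sum(predict(x) == y for x, y in zip(inputs, labels)) / len(labels)

def evaluate_robustness(predict, inputs, labels, perturb):
    """Report clean accuracy, accuracy under a perturbation, and the drop."""
    clean = accuracy(predict, inputs, labels)
    attacked = accuracy(predict, [perturb(x) for x in inputs], labels)
    return {"clean": clean, "attacked": attacked, "drop": clean - attacked}

# Toy keyword classifier and a character-swap "text attack".
predict = lambda text: "pos" if "good" in text else "neg"
perturb = lambda text: text.replace("good", "go0d")  # adversarial misspelling
texts = ["good movie", "bad movie"]
labels = ["pos", "neg"]
print(evaluate_robustness(predict, texts, labels, perturb))
```

A real harness would sweep many models, datasets, and perturbation types (text, structure, hybrid) in both poisoning and evasion settings, but the reported quantity is the same: the accuracy gap between clean and attacked inputs.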
Related papers
- GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs [17.77340454481932]
Recent work integrates Large Language Models with Graph Neural Networks (GNNs) to jointly model semantics and structure. This integration introduces dual vulnerabilities: GNNs are sensitive to structural perturbations, while LLM-derived features are vulnerable to prompt injection and adversarial perturbations. To address these gaps, we propose GRAPHTEXTACK, the first black-box, multi-modal, poisoning node injection attack for LLM-enhanced GNNs.
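Node-injection attacks of this general family add an attacker-controlled node and wire it to chosen targets. The sketch below shows only that skeletal mechanism on a plain adjacency list; it does not capture GRAPHTEXTACK's black-box optimization or its multi-modal (text plus structure) payload, and all names are illustrative.

```python
def inject_node(adj, feats, new_feat, targets):
    """Toy node-injection attack: add one node wired to the target nodes.

    adj:     dict mapping node id -> list of neighbor ids (undirected)
    feats:   list of feature vectors, indexed by node id
    returns: the id assigned to the injected node
    """
    nid = len(feats)           # next free node id
    feats.append(new_feat)     # attacker-chosen features
    adj[nid] = list(targets)   # edges from the injected node
    for t in targets:          # mirror edges for the undirected graph
        adj.setdefault(t, []).append(nid)
    return nid

adj = {0: [1], 1: [0]}
feats = [[1.0], [2.0]]
new_id = inject_node(adj, feats, [9.0], targets=[0])
print(new_id, adj)
```

In a real attack, `new_feat` and `targets` would be chosen by querying the victim model to maximize misclassification under a budget.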
arXiv Detail & Related papers (2025-11-16T02:42:48Z)
- Unveiling the Vulnerability of Graph-LLMs: An Interpretable Multi-Dimensional Adversarial Attack on TAGs [35.900360659024585]
Interpretable Multi-Dimensional Graph Attack (IMDGA) is a novel human-centric adversarial attack framework for Graph-LLMs. IMDGA demonstrates superior interpretability, attack effectiveness, stealthiness, and robustness compared to existing methods. This work uncovers a previously underexplored semantic dimension of vulnerability in Graph-LLMs, offering valuable insights for improving their resilience.
arXiv Detail & Related papers (2025-10-14T07:36:07Z)
- TrustGLM: Evaluating the Robustness of GraphLLMs Against Prompt, Text, and Structure Attacks [3.3238054848751535]
We introduce TrustGLM, a comprehensive study evaluating the vulnerability of GraphLLMs to adversarial attacks across three dimensions: text, graph structure, and prompt manipulations. Our findings reveal that GraphLLMs are highly susceptible to text attacks that merely replace a few semantically similar words in a node's textual attribute. We also find that standard graph structure attack methods can significantly degrade model performance, while random shuffling of the candidate label set in prompt templates leads to substantial performance drops.
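Word-substitution attacks of the kind described above can be sketched minimally as follows. The hand-written synonym table here is a stand-in for the embedding-based similarity search a real attack would use, and the fixed seed merely makes the toy deterministic.

```python
import random

def perturb_text(text, synonyms, budget=2, seed=0):
    """Replace up to `budget` words with listed near-synonyms (toy text attack)."""
    rng = random.Random(seed)
    words = text.split()
    # Indices of words for which a substitute is available.
    candidates = [i for i, w in enumerate(words) if w in synonyms]
    for i in rng.sample(candidates, min(budget, len(candidates))):
        words[i] = synonyms[words[i]]
    return " ".join(words)

# Hand-written synonym table; a real attack would pick substitutes that
# preserve semantics while flipping the victim model's prediction.
syns = {"powerful": "strong", "elusive": "unclear"}
print(perturb_text("powerful models remain elusive", syns))
```

Because only a few words change and the substitutes are semantically close, such perturbations are hard to detect, which is what makes the reported susceptibility of GraphLLMs notable.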
arXiv Detail & Related papers (2025-06-13T14:48:01Z)
- Robustness questions the interpretability of graph neural networks: what to do? [0.10713888959520207]
Graph Neural Networks (GNNs) have become a cornerstone in graph-based data analysis. This paper presents a benchmark to systematically analyze the impact of various factors on the interpretability of GNNs. We evaluate six GNN architectures based on GCN, SAGE, GIN, and GAT across five datasets from two distinct domains.
arXiv Detail & Related papers (2025-05-05T11:14:56Z)
- Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex [0.0]
BiGTex is a novel architecture that tightly integrates GNNs and LLMs through stacked Graph-Text Fusion Units. BiGTex achieves state-of-the-art performance in node classification and generalizes effectively to link prediction.
arXiv Detail & Related papers (2025-04-16T20:25:11Z)
- Advanced Text Analytics -- Graph Neural Network for Fake News Detection in Social Media [0.0]
Advanced Text Analysis Graph Neural Network (ATA-GNN) is proposed in this paper. ATA-GNN employs innovative topic modelling (clustering) techniques to identify typical words for each topic. Extensive evaluations on widely used benchmark datasets demonstrate that ATA-GNN surpasses the performance of current GNN-based FND methods.
arXiv Detail & Related papers (2025-02-22T09:17:33Z)
- Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision? [62.12375949429938]
We propose a multi-modal prompt learning paradigm to adapt pre-trained Graph Neural Networks to downstream tasks and data. Our new paradigm embeds the graphs directly in the same space as the Large Language Models (LLMs) by learning both graph prompts and text prompts simultaneously. We build the first CLIP-style zero-shot classification prototype that can generalize GNNs to unseen classes with extremely weak text supervision.
arXiv Detail & Related papers (2024-12-11T08:03:35Z)
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
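The sparsification step described above can be sketched as a feature-similarity filter over edges. The cosine threshold `tau` is an assumed hyperparameter and the criterion is illustrative; GSSC's actual sparsification rule may differ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def sparsify(edges, feats, tau=0.5):
    """Drop edges whose endpoint features are dissimilar (toy sparsification).

    edges: list of (u, v) pairs
    feats: dict mapping node id -> feature vector
    """
    return [(u, v) for u, v in edges if cosine(feats[u], feats[v]) >= tau]

feats = {0: [1.0, 0.0], 1: [1.0, 0.0], 2: [0.0, 1.0]}
print(sparsify([(0, 1), (0, 2)], feats))  # the dissimilar edge (0, 2) is removed
```

The surviving neighborhood then serves as the positive context for the self-contrasting objective, so noisy edges no longer pull unrelated nodes together.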
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- InfoBERT: Improving Robustness of Language Models from an Information Theoretic Perspective [84.78604733927887]
Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks.
Recent studies show that such BERT-based models are vulnerable to textual adversarial attacks.
We propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models.
arXiv Detail & Related papers (2020-10-05T20:49:26Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.