Navigating the Black Box: Leveraging LLMs for Effective Text-Level Graph Injection Attacks
- URL: http://arxiv.org/abs/2506.13276v1
- Date: Mon, 16 Jun 2025 09:16:21 GMT
- Title: Navigating the Black Box: Leveraging LLMs for Effective Text-Level Graph Injection Attacks
- Authors: Yuefei Lyu, Chaozhuo Li, Xi Zhang, Tianle Zhang
- Abstract summary: This paper introduces ATAG-LLM, a novel black-box GIA framework tailored for text-attributed graphs (TAGs). Our approach leverages large language models (LLMs) to generate interpretable text-level node attributes directly. This method efficiently perturbs the target node with minimal training costs in a strict black-box setting, enabling a fully text-level graph injection attack on TAGs.
- Score: 14.181622082567124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-attributed graphs (TAGs) integrate textual data with graph structures, providing valuable insights in applications such as social network analysis and recommendation systems. Graph Neural Networks (GNNs) effectively capture both topological structure and textual information in TAGs but are vulnerable to adversarial attacks. Existing graph injection attack (GIA) methods assume that attackers can directly manipulate the embedding layer, producing non-explainable node embeddings. Furthermore, the effectiveness of these attacks often relies on surrogate models with high training costs. Thus, this paper introduces ATAG-LLM, a novel black-box GIA framework tailored for TAGs. Our approach leverages large language models (LLMs) to generate interpretable text-level node attributes directly, ensuring attacks remain feasible in real-world scenarios. We design strategies for LLM prompting that balance exploration and reliability to guide text generation, and propose a similarity assessment method to evaluate attack text effectiveness in disrupting graph homophily. This method efficiently perturbs the target node with minimal training costs in a strict black-box setting, enabling a fully text-level graph injection attack on TAGs. Experiments on real-world TAG datasets validate the superior performance of ATAG-LLM compared to state-of-the-art embedding-level and text-level attack methods.
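The abstract does not spell out the similarity assessment; as a minimal sketch, assuming a generic off-the-shelf sentence encoder supplies the embeddings (the encoding step is hypothetical and happens outside this function), a homophily-disruption score for a candidate attack text might look like:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def homophily_disruption_score(attack_text_emb: np.ndarray,
                               target_emb: np.ndarray,
                               neighbor_embs: list[np.ndarray]) -> float:
    """Score a candidate attack text: the LESS similar the injected node's
    text is to the target node and its neighborhood, the more it disrupts
    the homophily the GNN relies on. Embeddings are assumed to come from
    any off-the-shelf sentence encoder."""
    sim_target = cosine(attack_text_emb, target_emb)
    sim_neighborhood = np.mean([cosine(attack_text_emb, e) for e in neighbor_embs])
    # Lower similarity to the target's local context -> higher disruption.
    return 1.0 - 0.5 * (sim_target + sim_neighborhood)
```

Candidate texts generated by the LLM would then be ranked by such a score, keeping the one that most weakens the target's local homophily.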
Related papers
- DGP: A Dual-Granularity Prompting Framework for Fraud Detection with Graph-Enhanced LLMs [55.13817504780764]
Real-world fraud detection applications benefit from graph learning techniques that jointly exploit node features, often rich in textual data, and graph structural information. Graph-Enhanced LLMs emerge as a promising graph learning approach that converts graph information into prompts. We propose Dual Granularity Prompting (DGP), which mitigates information overload by preserving fine-grained textual details for the target node.
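As an illustration of the dual-granularity idea (not DGP's actual implementation; plain truncation stands in for whatever summarization the paper uses), a prompt builder might keep the target's full text while coarsening its neighbors:

```python
def build_dual_granularity_prompt(target_text: str,
                                  neighbor_texts: list[str],
                                  coarse_len: int = 120) -> str:
    """Fine-grained detail for the target node, coarse summaries for its
    neighbors. Truncation is a stand-in for real summarization."""
    coarse = [t[:coarse_len] + ("..." if len(t) > coarse_len else "")
              for t in neighbor_texts]
    neighbor_block = "\n".join(f"- {c}" for c in coarse)
    return (
        "Target node (full text):\n"
        f"{target_text}\n\n"
        "Neighboring nodes (coarse summaries):\n"
        f"{neighbor_block}\n\n"
        "Question: Is the target node fraudulent? Answer yes or no."
    )
```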
arXiv Detail & Related papers (2025-07-29T10:10:47Z)
- TrustGLM: Evaluating the Robustness of GraphLLMs Against Prompt, Text, and Structure Attacks [3.3238054848751535]
We introduce TrustGLM, a comprehensive study evaluating the vulnerability of GraphLLMs to adversarial attacks across three dimensions: text, graph structure, and prompt manipulations. Our findings reveal that GraphLLMs are highly susceptible to text attacks that merely replace a few semantically similar words in a node's textual attribute. We also find that standard graph structure attack methods can significantly degrade model performance, while random shuffling of the candidate label set in prompt templates leads to substantial performance drops.
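A minimal sketch of the kind of word-level substitution attack described here, assuming a pretrained word-vector table (`word_vecs`, e.g. loaded from GloVe) is available; the paper's own attack may select and constrain replacements differently:

```python
import numpy as np

def swap_similar_words(tokens: list[str],
                       word_vecs: dict[str, np.ndarray],
                       n_swaps: int = 3) -> list[str]:
    """Replace a few words with their nearest neighbors in a word-embedding
    space, mirroring the minimal, semantically-similar substitutions the
    study reports GraphLLMs are vulnerable to."""
    out = list(tokens)
    swapped = 0
    for i, tok in enumerate(tokens):
        if swapped >= n_swaps or tok not in word_vecs:
            continue
        v = word_vecs[tok]
        best, best_sim = None, -1.0
        for w, u in word_vecs.items():
            if w == tok:
                continue
            sim = float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-12))
            if sim > best_sim:
                best, best_sim = w, sim
        if best is not None:
            out[i] = best  # nearest different word by cosine similarity
            swapped += 1
    return out
```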
arXiv Detail & Related papers (2025-06-13T14:48:01Z)
- Efficient Text-Attributed Graph Learning through Selective Annotation and Graph Alignment [24.0890725396281]
We introduce GAGA, an efficient framework for TAG representation learning. It reduces annotation time and cost by annotating only representative nodes and edges. Experiments show that GAGA achieves classification accuracies on par with or surpassing state-of-the-art methods.
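One plausible reading of "annotating only representative nodes" is a coverage-based selection in embedding space; a sketch using k-means (an assumption, not necessarily GAGA's actual criterion):

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative_nodes(node_embs: np.ndarray, budget: int) -> list[int]:
    """Pick `budget` nodes to annotate: cluster the embedding space and take
    the node closest to each centroid, so a few labels cover diverse regions."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(node_embs)
    chosen = []
    for c in km.cluster_centers_:
        chosen.append(int(np.argmin(np.linalg.norm(node_embs - c, axis=1))))
    return chosen
```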
arXiv Detail & Related papers (2025-06-08T14:34:29Z)
- Few-Shot Graph Out-of-Distribution Detection with LLMs [34.42512005781724]
We propose a framework that combines the strengths of large language models (LLMs) and graph neural networks (GNNs) to enhance data efficiency in graph out-of-distribution (OOD) detection. We show that LLM-GOOD significantly reduces human annotation costs and outperforms state-of-the-art baselines in terms of both ID classification accuracy and OOD detection performance.
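A rough sketch of the LLM-as-annotator step, assuming `llm` is any chat-completion callable (hypothetical) and that OOD filtering reduces to a yes/no prompt per node; LLM-GOOD's actual prompting and filtering are surely more involved:

```python
def llm_ood_filter(node_texts: list[str], id_labels: list[str], llm) -> list[int]:
    """Ask an LLM whether a node's text belongs to one of the known
    in-distribution classes; keep only ID-looking nodes as cheap
    annotations for training the downstream GNN."""
    kept = []
    for i, text in enumerate(node_texts):
        prompt = (
            f"Known classes: {', '.join(id_labels)}.\n"
            f"Node text: {text}\n"
            "Does this text belong to one of the known classes? "
            "Answer 'yes' or 'no'."
        )
        if llm(prompt).strip().lower().startswith("yes"):
            kept.append(i)
    return kept
```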
arXiv Detail & Related papers (2025-03-28T02:37:18Z)
- AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks [8.512355226572254]
Adversarial attacks on Graph Neural Networks aim to degrade the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and the differences in graph properties. We propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), a graph structure attack model that ensures the retention of primary semantics.
arXiv Detail & Related papers (2024-12-10T12:35:37Z)
- SaVe-TAG: Semantic-aware Vicinal Risk Minimization for Long-Tailed Text-Attributed Graphs [16.24571541782205]
Real-world graph data often follows long-tailed distributions, making it difficult for Graph Neural Networks (GNNs) to generalize well across both head and tail classes. Recent advances in Vicinal Risk Minimization (VRM) have shown promise in mitigating class imbalance with numeric semantics.
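For context, the textbook VRM instance is embedding-level mixup; purely numeric interpolation of this kind is what a "semantic-aware" variant would improve on:

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha: float = 0.4):
    """Classic mixup, the standard instance of Vicinal Risk Minimization:
    train on convex combinations of sample pairs. x_*: feature vectors;
    y_*: one-hot label vectors. Note this interpolation is numeric only
    and carries no textual semantics."""
    lam = np.random.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j
```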
arXiv Detail & Related papers (2024-10-22T10:36:15Z)
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
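For reference, the simplest membership inference baseline that defenses like GTD are measured against (this is the generic confidence attack, not GTD itself):

```python
import numpy as np

def confidence_mia(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Textbook confidence-based membership inference: training members
    tend to receive higher-confidence predictions, so flag a node as
    'member' when its max class probability exceeds a threshold.
    `probs` has shape (n_nodes, n_classes); returns a boolean mask."""
    return probs.max(axis=1) > threshold
```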
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level [21.003091265006102]
Graph Neural Networks (GNNs) excel across various applications but remain vulnerable to adversarial attacks.
In this paper, we pioneer the exploration of Graph Injection Attacks (GIAs) at the text level.
We show that text interpretability, a factor previously overlooked at the embedding level, plays a crucial role in attack strength.
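One common proxy for text-level interpretability is fluency under a pretrained language model; a perplexity check with GPT-2 (the paper may quantify interpretability differently):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more natural-reading injected text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))
```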
arXiv Detail & Related papers (2024-05-26T02:12:02Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
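A minimal sketch of the graph-prompt idea, with placeholder dimensions and a mean aggregation standing in for whatever GNN GPEFT actually uses:

```python
import torch
import torch.nn as nn

class GraphPrompt(nn.Module):
    """Pool neighbor embeddings with a tiny message-passing layer and
    project the result into the LLM's hidden space, to be prepended to
    the input sequence as one soft prompt token."""
    def __init__(self, node_dim: int = 256, llm_dim: int = 4096):
        super().__init__()
        self.msg = nn.Linear(node_dim, node_dim)   # neighbor message transform
        self.proj = nn.Linear(node_dim, llm_dim)   # map into LLM hidden space

    def forward(self, neighbor_embs: torch.Tensor) -> torch.Tensor:
        # neighbor_embs: (num_neighbors, node_dim)
        h = torch.relu(self.msg(neighbor_embs)).mean(dim=0)  # mean aggregation
        return self.proj(h)  # (llm_dim,): one soft prompt token
```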
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
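The core trick behind a differentiable structure attack is relaxing discrete edge flips to continuous probabilities; a generic sketch (DGA's actual parameterization and budget handling will differ), where `loss_fn` is assumed to run a differentiable surrogate of the victim model:

```python
import torch

def differentiable_edge_attack(adj, loss_fn, steps: int = 50, lr: float = 0.1):
    """Learn a per-entry flip score, push it through a sigmoid so the
    perturbed adjacency stays differentiable, and ascend the attack
    objective by gradient."""
    pert = torch.full_like(adj, -5.0).requires_grad_()  # start near 'no flip'
    opt = torch.optim.Adam([pert], lr=lr)
    for _ in range(steps):
        flip = torch.sigmoid(pert)                     # flip probability per entry
        adj_pert = adj * (1 - flip) + (1 - adj) * flip  # toggle edges softly
        loss = -loss_fn(adj_pert)                      # gradient *ascent*
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Discretize at the end (budget constraint omitted in this sketch).
    flip = (torch.sigmoid(pert) > 0.5).float()
    return adj * (1 - flip) + (1 - adj) * flip
```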
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
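The "no negative sampling" claim evokes BYOL-style bootstrapping; a sketch of that negative-free objective and the momentum update it relies on (BOURNE's node-edge context swap is omitted here):

```python
import torch
import torch.nn.functional as F

def bootstrap_loss(online_pred: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    """Negative-free objective: pull the online network's prediction toward
    a slowly-updated target network's projection of another view.
    Both inputs: (n_nodes, dim); the target side carries no gradient."""
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau: float = 0.99):
    """Momentum (EMA) update of the target network's parameters."""
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```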
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
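The "explanations as features" move reduces, at its simplest, to embedding the LLM's explanation text and concatenating it with the original node features before the GNN; a sketch assuming the explanation embeddings are precomputed by a smaller LM:

```python
import numpy as np

def explanation_features(node_feats: np.ndarray,
                         explanation_embs: np.ndarray) -> np.ndarray:
    """Concatenate each node's original features with an embedding of the
    LLM's textual explanation for that node, producing the enriched input
    the downstream GNN trains on.
    Shapes: (n, d1) and (n, d2) -> (n, d1 + d2)."""
    return np.concatenate([node_feats, explanation_embs], axis=1)
```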
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.