Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level
- URL: http://arxiv.org/abs/2405.16405v2
- Date: Fri, 01 Nov 2024 12:15:36 GMT
- Title: Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level
- Authors: Runlin Lei, Yuwei Hu, Yuchen Ren, Zhewei Wei
- Abstract summary: Graph Neural Networks (GNNs) excel across various applications but remain vulnerable to adversarial attacks.
In this paper, we pioneer the exploration of Graph Injection Attacks (GIAs) at the text level.
We show that text interpretability, a factor previously overlooked at the embedding level, plays a crucial role in attack strength.
- Score: 21.003091265006102
- License:
- Abstract: Graph Neural Networks (GNNs) excel across various applications but remain vulnerable to adversarial attacks, particularly Graph Injection Attacks (GIAs), which inject malicious nodes into the original graph and pose realistic threats. Text-attributed graphs (TAGs), where nodes are associated with textual features, are crucial due to their prevalence in real-world applications and are commonly used to evaluate these vulnerabilities. However, existing research only focuses on embedding-level GIAs, which inject node embeddings rather than actual textual content, limiting their applicability and simplifying detection. In this paper, we pioneer the exploration of GIAs at the text level, presenting three novel attack designs that inject textual content into the graph. Through theoretical and empirical analysis, we demonstrate that text interpretability, a factor previously overlooked at the embedding level, plays a crucial role in attack strength. Among the designs we investigate, the Word-frequency-based Text-level GIA (WTGIA) is particularly notable for its balance between performance and interpretability. Despite the success of WTGIA, we discover that defenders can easily enhance their defenses with customized text embedding methods or large language model (LLM)-based predictors. These insights underscore the necessity for further research into the potential and practical significance of text-level GIAs.
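To make the text-level injection concrete, below is a minimal, hypothetical sketch of a word-frequency-based injection in Python. The heuristic (filling each injected node's text with words that are frequent among nodes of a chosen class) and every function name are illustrative assumptions, not the paper's actual WTGIA procedure.

```python
# Hypothetical sketch of a word-frequency-based text-level injection.
# Assumption: the attacker fills each injected node's text with words that
# are frequent in a chosen class; this is NOT the paper's WTGIA algorithm.
from collections import Counter

def build_class_vocab(texts, labels, target_class, top_k=50):
    """Collect the most frequent words among nodes of the target class."""
    counter = Counter()
    for text, label in zip(texts, labels):
        if label == target_class:
            counter.update(text.lower().split())
    return [word for word, _ in counter.most_common(top_k)]

def inject_text_nodes(texts, edges, victim_ids, class_vocab, words_per_node=30):
    """Append injected nodes whose text is built from class-frequent words,
    wiring each new node to one victim node."""
    new_texts = list(texts)
    new_edges = list(edges)
    for victim in victim_ids:
        new_id = len(new_texts)
        # Cycle through the vocabulary so every injected node carries real,
        # readable words rather than an arbitrary embedding vector.
        words = [class_vocab[i % len(class_vocab)] for i in range(words_per_node)]
        new_texts.append(" ".join(words))
        new_edges.append((new_id, victim))
    return new_texts, new_edges
```

Presumably, a full attack would couple this text construction with an embedding-level objective that selects which words and edges to use; the sketch only illustrates why word-level choices keep the injected content readable, which is the interpretability dimension the abstract highlights.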
Related papers
- Is the Digital Forensics and Incident Response Pipeline Ready for Text-Based Threats in LLM Era? [3.3205885925042704]
In the era of generative AI, the widespread adoption of Neural Text Generators (NTGs) presents new cybersecurity challenges.
This paper rigorously evaluates the DFIR pipeline tailored for text-based security systems.
By introducing a novel human-NTG co-authorship text attack, our study uncovers significant vulnerabilities in traditional DFIR methodologies.
arXiv Detail & Related papers (2024-07-25T08:42:53Z) - Relaxing Graph Transformers for Adversarial Attacks [49.450581960551276]
Graph Transformers (GTs) have surpassed Message-Passing GNNs on several benchmarks, yet their adversarial robustness properties are unexplored.
We overcome these challenges by targeting three representative architectures based on (1) random-walk positional encodings, (2) pairwise shortest paths, and (3) spectral perturbations.
Our evaluation reveals that they can be catastrophically fragile and underlines our work's importance and the necessity for adaptive attacks.
arXiv Detail & Related papers (2024-07-16T14:24:58Z) - Attacks on Node Attributes in Graph Neural Networks [32.40598187698689]
This research investigates the vulnerability of graph models through the application of feature-based adversarial attacks.
Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent than poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies (a generic PGD feature-perturbation sketch appears after this list).
arXiv Detail & Related papers (2024-02-19T17:52:29Z) - GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z) - EDoG: Adversarial Edge Detection For Graph Neural Networks [17.969573886307906]
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node or subgraph classification prediction by adding subtle perturbations.
We propose EDoG, a general adversarial edge detection pipeline based on graph generation that does not require knowledge of the attack strategies.
arXiv Detail & Related papers (2022-12-27T20:42:36Z) - Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, namely the hard-label attack, in which the attacker can only access the prediction label.
Based on this observation, we propose a novel hard-label attack called the Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks in both attack performance and the quality of the generated adversarial examples.
arXiv Detail & Related papers (2022-01-20T14:16:07Z) - Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on this canonical task.
Despite this success, their performance can be largely jeopardized in practice because they are unable to capture high-order interactions between words.
We propose a principled model, hypergraph attention networks (HyperGAT), which obtains more expressive power with less computational cost for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
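For the PGD decision-time attack referenced above ("Attacks on Node Attributes in Graph Neural Networks"), the sketch below shows a generic PGD-style perturbation of node features. The `model(features, edge_index)` signature, the L-infinity projection, and the step sizes are assumptions for illustration, not the cited paper's configuration.

```python
# Generic PGD-style decision-time attack on node features (illustrative only).
# Assumption: `model` is any differentiable GNN mapping (features, edge_index)
# to per-node logits; this is not the cited paper's exact implementation.
import torch
import torch.nn.functional as F

def pgd_feature_attack(model, features, edge_index, labels, target_nodes,
                       epsilon=0.1, step_size=0.02, steps=20):
    """Perturb the features of target nodes within an L-infinity ball of radius epsilon."""
    perturbed = features.clone().detach()
    mask = torch.zeros_like(features)
    mask[target_nodes] = 1.0  # restrict the perturbation to attacked nodes
    for _ in range(steps):
        perturbed.requires_grad_(True)
        logits = model(perturbed, edge_index)
        loss = F.cross_entropy(logits[target_nodes], labels[target_nodes])
        grad = torch.autograd.grad(loss, perturbed)[0]
        with torch.no_grad():
            # Ascend the loss on the attacked nodes' features only.
            perturbed = perturbed + step_size * grad.sign() * mask
            # Project back into the epsilon ball around the clean features.
            perturbed = features + (perturbed - features).clamp(-epsilon, epsilon)
        perturbed = perturbed.detach()
    return perturbed
```

A defender-side check would simply compare model accuracy on the targeted nodes for `features` versus the returned perturbed tensor.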
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.