Multi-Instance Adversarial Attack on GNN-Based Malicious Domain
Detection
- URL: http://arxiv.org/abs/2308.11754v1
- Date: Tue, 22 Aug 2023 19:51:16 GMT
- Title: Multi-Instance Adversarial Attack on GNN-Based Malicious Domain
Detection
- Authors: Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, and Yao
Ma
- Abstract summary: Malicious domain detection (MDD) is an open security challenge that aims to detect if an Internet domain is associated with cyber-attacks.
GNN-based MDD uses DNS logs to represent Internet domains as nodes in a domain maliciousness graph (DMG).
We introduce MintA, an inference-time multi-instance adversarial attack on GNN-based MDDs.
- Score: 8.072660302473508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malicious domain detection (MDD) is an open security challenge that aims to
detect if an Internet domain is associated with cyber-attacks. Among many
approaches to this problem, graph neural networks (GNNs) are deemed highly
effective. GNN-based MDD uses DNS logs to represent Internet domains as nodes
in a domain maliciousness graph (DMG) and trains a GNN to infer their maliciousness by
leveraging identified malicious domains. Since this method relies on accessible
DNS logs to construct DMGs, it exposes a vulnerability for adversaries to
manipulate their domain nodes' features and connections within DMGs. Existing
research mainly concentrates on threat models that manipulate individual
attacker nodes. However, adversaries commonly generate multiple domains to
achieve their goals economically and avoid detection. Their objective is to
evade discovery across as many domains as feasible. In this work, we call the
attack that manipulates several nodes in the DMG concurrently a multi-instance
evasion attack. We present theoretical and empirical evidence that the existing
single-instance evasion techniques are inadequate to launch multi-instance
evasion attacks against GNN-based MDDs. Therefore, we introduce MintA, an
inference-time multi-instance adversarial attack on GNN-based MDDs. MintA
enhances node and neighborhood evasiveness through optimized perturbations and
operates successfully with only black-box access to the target model,
eliminating the need for knowledge about the model's specifics or non-adversary
nodes. We formulate an optimization problem for MintA and derive an
approximate solution. Evaluating MintA on a leading GNN-based MDD technique
with real-world data showcases an attack success rate exceeding 80%. These
findings act as a warning for security experts, underscoring GNN-based MDDs'
susceptibility to practical attacks that can undermine their effectiveness and
benefits.
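To make the detection setting concrete, the following is a minimal, hypothetical sketch of the pipeline the abstract describes: domains become nodes in a domain maliciousness graph (DMG), DNS-derived features attach to each node, and a two-layer GCN is trained semi-supervised from a small set of known benign/malicious domains. The graph, feature dimensions, and all names below are illustrative assumptions, not the paper's actual construction.

```python
# Hypothetical sketch of GNN-based malicious domain detection (MDD).
# Domains are nodes in a domain maliciousness graph (DMG); edges could come
# from shared DNS resolutions or client overlap. Everything here is a toy
# stand-in for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2}, as used by GCNs."""
    a = adj + torch.eye(adj.size(0))
    deg = a.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class TwoLayerGCN(nn.Module):
    """Two-layer GCN that scores each domain node as benign (0) or malicious (1)."""
    def __init__(self, in_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, 2)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        h = F.relu(a_hat @ self.w1(x))   # aggregate 1-hop neighborhood features
        return a_hat @ self.w2(h)        # per-domain logits

# Toy DMG: 4 domains, 5 hypothetical DNS-derived features each.
x = torch.randn(4, 5)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
model = TwoLayerGCN(in_dim=5)
logits = model(x, normalized_adjacency(adj))
labels = torch.tensor([0, 1, 1, 0])           # known benign/malicious seed domains
loss = F.cross_entropy(logits, labels)        # semi-supervised training signal
```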
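The attack side can be sketched in the same hypothetical setting. MintA's actual optimization is not reproduced here; the snippet below is only a generic, query-only (black-box) stand-in that jointly perturbs the features of several attacker-controlled nodes within a budget and keeps a perturbation only when it raises the fraction of those nodes classified benign. It reuses the toy `model`, `x`, `adj`, and `normalized_adjacency` defined in the previous sketch.

```python
# Hypothetical multi-instance, black-box evasion loop: the adversary controls a
# set of domain nodes and searches for joint feature perturbations (bounded in
# L_inf) that make as many of them as possible look benign. This is a generic
# random-search stand-in, not MintA's optimization.
import torch

def evasion_rate(model, x, a_hat, attacker_nodes):
    """Fraction of attacker-controlled nodes the detector labels benign (class 0)."""
    with torch.no_grad():
        preds = model(x, a_hat).argmax(dim=1)
    return (preds[attacker_nodes] == 0).float().mean().item()

def multi_instance_attack(model, x, a_hat, attacker_nodes, budget=0.5, queries=200):
    """Query-only search over joint feature perturbations of all attacker nodes."""
    best_x = x.clone()
    best_rate = evasion_rate(model, best_x, a_hat, attacker_nodes)
    for _ in range(queries):
        cand = best_x.clone()
        # Perturb only attacker-controlled nodes, bounded by the L_inf budget.
        noise = (torch.rand(len(attacker_nodes), x.size(1)) - 0.5) * 2 * budget
        cand[attacker_nodes] = x[attacker_nodes] + noise
        rate = evasion_rate(model, cand, a_hat, attacker_nodes)
        if rate > best_rate:              # keep perturbations that help the group
            best_x, best_rate = cand, rate
    return best_x, best_rate

# Usage with the toy DMG above: nodes 1 and 2 are the adversary's domains.
attacker_nodes = torch.tensor([1, 2])
x_adv, rate = multi_instance_attack(model, x, normalized_adjacency(adj), attacker_nodes)
```

The joint acceptance test is what makes this a multi-instance sketch in the abstract's sense: a perturbation is kept only if it improves evasiveness for the attacker's domains as a group rather than optimizing each node in isolation.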
Related papers
- ADAGE: Active Defenses Against GNN Extraction [9.707239870468735] (2025-02-27)
Graph Neural Networks (GNNs) achieve high performance in various real-world applications, such as drug discovery, traffic state prediction, and recommendation systems.
The threat vector of stealing attacks against GNNs is large and diverse.
We propose ADAGE, the first general Active Defense Against GNN Extraction.
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275] (2024-05-09)
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
- Identifying Backdoored Graphs in Graph Neural Network Training: An Explanation-Based Approach with Novel Metrics [13.93535590008316] (2024-03-26)
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks.
We devised a novel detection method that creatively leverages graph-level explanations.
Our results show that our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks.
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826] (2023-06-12)
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473] (2022-09-20)
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005] (2022-08-03)
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
- Transferable Graph Backdoor Attack [13.110473828583725] (2022-06-21)
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
- GUARD: Graph Universal Adversarial Defense [54.81496179947696] (2022-04-20)
We present a simple yet effective method named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
- Robustness of Graph Neural Networks at Scale [63.45769413975601] (2021-10-26)
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
- Single-Node Attack for Fooling Graph Neural Networks [5.7923858184309385] (2020-11-06)
Graph neural networks (GNNs) have shown broad applicability in a variety of domains.
Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior.
In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-node adversarial example.
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019] (2020-03-02)
Deep learning models can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.