AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
- URL: http://arxiv.org/abs/2402.13946v2
- Date: Mon, 26 Feb 2024 20:18:38 GMT
- Title: AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
- Authors: Vasudev Gohil, Satwik Patnaik, Dileep Kalathil, Jeyavijayan Rajendran
- Abstract summary: We propose AttackGNN, the first red-team attack on GNN-based techniques in hardware security.
We target five GNN-based techniques for four crucial classes of problems in hardware security: IP piracy, detecting/localizing HTs, reverse engineering, and hardware obfuscation.
- Score: 16.751700469734708
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning has shown great promise in addressing several critical
hardware security problems. In particular, researchers have developed novel
graph neural network (GNN)-based techniques for detecting intellectual property
(IP) piracy, detecting hardware Trojans (HTs), and reverse engineering
circuits, to name a few. These techniques have demonstrated outstanding
accuracy and have received much attention in the community. However, since
these techniques are used for security applications, it is imperative to
evaluate them thoroughly and ensure they are robust and do not compromise the
security of integrated circuits.
In this work, we propose AttackGNN, the first red-team attack on GNN-based
techniques in hardware security. To this end, we devise a novel reinforcement
learning (RL) agent that generates adversarial examples, i.e., circuits,
against the GNN-based techniques. We overcome three challenges related to
effectiveness, scalability, and generality to devise a potent RL agent. We
target five GNN-based techniques for four crucial classes of problems in
hardware security: IP piracy, detecting/localizing HTs, reverse engineering,
and hardware obfuscation. Through our approach, we craft circuits that fool all
GNNs considered in this work. For instance, to evade IP piracy detection, we
generate adversarial pirated circuits that fool the GNN-based defense into
classifying our crafted circuits as not pirated. For attacking HT localization
GNN, our attack generates HT-infested circuits that fool the defense on all
tested circuits. We obtain a similar 100% success rate against GNNs for all
classes of problems.
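The abstract describes AttackGNN's recipe only at a high level: a reinforcement learning agent repeatedly applies functionality-preserving perturbations to a circuit and is rewarded when the targeted GNN misclassifies the result. As a rough illustration of that perturb-and-reward loop (and nothing more), the sketch below uses a toy netlist, a hypothetical set of rewrite actions, and a stub `gnn_confidence` scorer in place of a real GNN-based piracy detector; it is not the authors' implementation.

```python
"""Minimal sketch of an RL-style adversarial-circuit search.

Hypothetical placeholders throughout: the netlist representation, the
rewrite actions, and `gnn_confidence` stand in for a real GNN-based
IP-piracy detector; this is not the AttackGNN implementation.
"""
import random

# Toy netlist: a list of (gate_type, fan_in) pairs.
NETLIST = [("AND", 2), ("OR", 2), ("XOR", 2), ("NOT", 1), ("AND", 2)]

# Functionality-preserving rewrites an agent could choose from (hypothetical).
ACTIONS = ["double_inversion", "demorgan_rewrite", "insert_buffer"]

def apply_rewrite(netlist, action):
    """Return a new netlist after a (stubbed) functionality-preserving rewrite."""
    new = list(netlist)
    if action == "double_inversion":
        new += [("NOT", 1), ("NOT", 1)]
    elif action == "demorgan_rewrite":
        new += [("NAND", 2), ("NOT", 1)]
    else:  # insert_buffer
        new += [("BUF", 1)]
    return new

def gnn_confidence(netlist):
    """Stub for the defender's GNN: probability the circuit is 'pirated'.

    Depends only on a crude structural statistic so the sketch runs;
    a real attack would query the actual GNN-based detector.
    """
    extra = sum(1 for g, _ in netlist if g in ("NOT", "BUF", "NAND"))
    return max(0.0, 0.95 - 0.07 * extra)

# Epsilon-greedy bandit over rewrite actions, rewarded by confidence drop.
q_values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
state = NETLIST
epsilon = 0.2

for step in range(20):
    before = gnn_confidence(state)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    state = apply_rewrite(state, action)
    reward = before - gnn_confidence(state)          # confidence drop = reward
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]
    if gnn_confidence(state) < 0.5:                  # detector now says "not pirated"
        print(f"evaded at step {step}: confidence={gnn_confidence(state):.2f}")
        break
```

The agent in the paper additionally addresses effectiveness, scalability, and generality across five GNN-based techniques; the sketch only conveys the shape of the feedback loop between circuit perturbation and detector response.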
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
- SNNGX: Securing Spiking Neural Networks with Genetic XOR Encryption on RRAM-based Neuromorphic Accelerator [34.474841993360855]
Spiking Neural Networks (SNNs), characterized by spike sparsity, are attracting tremendous attention for intelligent edge devices and critical bio-medical applications.
However, there is a considerable risk from malicious attempts to extract white-box information from SNNs.
We present a novel secure software-hardware co-designed RRAM-based neuromorphic accelerator for protecting the IP of SNNs.
arXiv Detail & Related papers (2024-07-21T13:08:05Z)
- Evasive Hardware Trojan through Adversarial Power Trace [6.949268510101616]
We introduce an HT obfuscation (HTO) approach that allows HTs to bypass detection methods.
HTO can be implemented with only a single transistor for ASICs and FPGAs.
We show that an adaptive attacker can still design evasive HTOs by constraining the design with a spectral noise budget.
arXiv Detail & Related papers (2024-01-04T16:28:15Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models deployed in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms; a toy sketch of this query-and-compare idea appears after this list.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers can easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation [40.21760151203987]
We conduct the first comprehensive study of graph reconstruction attacks, which aim to reconstruct the adjacency of nodes.
We show that a range of factors in GNNs can lead to the surprising leakage of private links.
We propose two information theory-guided mechanisms: (1) the chain-based attack method with adaptive designs for extracting more private information; (2) the chain-based defense method that sharply reduces the attack fidelity with moderate accuracy loss.
arXiv Detail & Related papers (2023-06-15T13:00:56Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can contain 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches.
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes; a toy numerical illustration of a single weight bit flip appears after this list.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by adversarial attacks with small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
- Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets [83.12737997548645]
Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs).
The use of skip connections allows easier generation of highly transferable adversarial examples.
We conduct comprehensive transfer attacks against state-of-the-art DNNs, including ResNets, DenseNets, Inceptions, Inception-ResNet, and Squeeze-and-Excitation Networks (SENet).
arXiv Detail & Related papers (2020-02-14T12:09:21Z)
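The "Securing Graph Neural Networks in MLaaS" entry above mentions query-based integrity verification with fingerprint nodes. The toy sketch below illustrates the general query-and-compare idea under heavy simplification: `deployed_model` and the random "fingerprints" are hypothetical stand-ins, and none of this reproduces the paper's fingerprint generation algorithms.

```python
"""Toy sketch of query-based integrity verification (not the paper's algorithm).

`deployed_model` is a hypothetical stand-in for a GNN served via MLaaS; the
"fingerprints" here are just fixed random feature vectors whose reference
predictions are re-checked later to flag tampering.
"""
import random

random.seed(0)

def make_fingerprints(n=5, dim=4):
    """Generate fixed probe inputs (stand-ins for fingerprint nodes)."""
    return [[random.random() for _ in range(dim)] for _ in range(n)]

def deployed_model(x, tampered=False):
    """Stub classifier: returns a label from a linear score (hypothetical)."""
    w = [0.5, -0.25, 1.0, 0.1]
    if tampered:
        w = [-0.5, 0.25, -1.0, 0.1]   # simulated model-centric attack
    score = sum(wi * xi for wi, xi in zip(w, x))
    return int(score > 0.3)

fingerprints = make_fingerprints()
reference = [deployed_model(x) for x in fingerprints]      # recorded at deployment

def verify(tampered):
    """Re-query the deployed model and compare against reference labels."""
    current = [deployed_model(x, tampered=tampered) for x in fingerprints]
    return current == reference

print("intact model passes:", verify(tampered=False))   # True
print("tampered model passes:", verify(tampered=True))  # False: fingerprints flag it
```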
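The DeepHammer entry above attacks quantized DNNs via targeted bit flips. The following toy calculation (not DeepHammer itself, which relies on Rowhammer-induced DRAM bit flips and careful target selection) only shows why flipping the most significant bit of a single signed 8-bit weight can swing a layer's output; the weights and activations are made up for illustration.

```python
"""Toy illustration of why single bit flips hurt quantized inference.

Not DeepHammer: this only shows the numerical effect of flipping the most
significant bit of one signed 8-bit weight in a tiny dot-product "layer".
"""

def flip_bit_int8(w, bit):
    """Flip one bit of a signed 8-bit integer and re-interpret as int8."""
    u = w & 0xFF          # view as an unsigned byte
    u ^= (1 << bit)       # flip the chosen bit
    return u - 256 if u >= 128 else u

weights = [23, -41, 7, 88]          # int8 weights of a toy neuron (hypothetical)
inputs = [1, 2, 1, 1]               # toy activations

clean = sum(w * x for w, x in zip(weights, inputs))

# Flip the MSB (bit 7) of the last weight: 88 re-interprets as -40.
attacked_weights = list(weights)
attacked_weights[3] = flip_bit_int8(weights[3], 7)
attacked = sum(w * x for w, x in zip(attacked_weights, inputs))

print(f"clean output:    {clean}")     # 23 - 82 + 7 + 88 = 36
print(f"attacked output: {attacked}")  # last term becomes -40, output drops to -92
```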
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.