Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks
- URL: http://arxiv.org/abs/2312.04035v1
- Date: Thu, 7 Dec 2023 04:38:01 GMT
- Title: Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks
- Authors: Xiaobei Yan, Chip Hong Chang, Tianwei Zhang
- Abstract summary: We present AIAShield, a novel defense methodology to safeguard FPGA-based AI accelerators.
We leverage the prominent adversarial attack technique from the machine learning community to craft delicate noise.
AIAShield outperforms existing solutions with excellent transferability.
- Score: 21.611341074006162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) hardware accelerators have been widely adopted to enhance the efficiency of deep learning applications. However, they also raise security concerns regarding their vulnerability to power side-channel attacks (SCAs). In these attacks, the adversary exploits unintended communication channels to infer sensitive information processed by the accelerator, posing significant privacy and copyright risks to the models. Advanced machine learning algorithms are further employed to facilitate the side-channel analysis and exacerbate the privacy issue of AI accelerators. Traditional defense strategies naively inject execution noise into the runtime of AI models, which inevitably introduces large overheads. In this paper, we present AIAShield, a novel defense methodology to safeguard FPGA-based AI accelerators and mitigate model extraction threats via power-based SCAs. The key insight of AIAShield is to leverage the prominent adversarial attack technique from the machine learning community to craft delicate noise, which can significantly obfuscate the adversary's side-channel observation while incurring minimal overhead to the execution of the protected model. At the hardware level, we design a new module based on ring oscillators to achieve fine-grained noise generation. At the algorithm level, we repurpose Neural Architecture Search to worsen the adversary's extraction results. Extensive experiments on the Nvidia Deep Learning Accelerator (NVDLA) demonstrate that AIAShield outperforms existing solutions with excellent transferability.
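To make the key insight concrete, the following is a minimal, hypothetical sketch of the underlying idea: use a one-step FGSM perturbation to craft noise that misleads an adversary's side-channel classifier. The `TraceClassifier`, trace shape, and `eps` budget are illustrative assumptions, not AIAShield's actual pipeline, which generates noise in hardware via ring oscillators and tunes the defense with Neural Architecture Search.

```python
# Hypothetical sketch of adversarial-attack-style noise crafting for a power
# trace. The classifier, trace shape, and eps are illustrative stand-ins, not
# AIAShield's actual hardware pipeline.
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    """Stand-in for the adversary's side-channel model (e.g., a layer classifier)."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),
            nn.Linear(16 * 32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def craft_noise(model, trace, label, eps=0.01):
    """One FGSM step: a small additive perturbation that pushes the
    adversary's prediction on this trace toward being wrong."""
    trace = trace.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(trace), label)
    loss.backward()
    return eps * trace.grad.sign()  # injected alongside the real power signal

model = TraceClassifier()
trace = torch.randn(1, 1, 1024)   # placeholder power trace
label = torch.tensor([3])         # true class the defender wants to hide
obfuscated = trace + craft_noise(model, trace, label)
```

The sign of the loss gradient gives, per sample point, the direction in which a small power deviation most confuses the classifier, while the budget `eps` caps how much noise (and hence overhead) the defense adds.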
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Enhancing Physical Layer Communication Security through Generative AI with Mixture of Experts [80.0638227807621]
Generative artificial intelligence (GAI) models have demonstrated superiority over conventional AI methods.
Mixture of Experts (MoE), which uses multiple expert models for prediction through a gating mechanism, offers possible solutions.
arXiv Detail & Related papers (2024-05-07T11:13:17Z)
- A Unified Hardware-based Threat Detector for AI Accelerators [12.96840649714218]
We design UniGuard, a novel unified and non-intrusive detection methodology to safeguard FPGA-based AI accelerators.
We employ a Time-to-Digital Converter to capture power fluctuations and train a supervised machine learning model to identify various types of threats (a toy detector of this kind is sketched after this list).
arXiv Detail & Related papers (2023-11-28T10:55:02Z)
- Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning [6.409047279789011]
GPAM is a deep-learning system for power side-channel analysis.
It generalizes across multiple cryptographic algorithms, implementations, and side-channel countermeasures.
We demonstrate GPAM's capability by successfully attacking four hardened hardware-accelerated elliptic-curve digital-signature implementations.
arXiv Detail & Related papers (2023-06-12T17:16:26Z)
- Practical Adversarial Attacks Against AI-Driven Power Allocation in a Distributed MIMO Network [0.0]
In distributed multiple-input multiple-output (D-MIMO) networks, power control is crucial to optimize the spectral efficiencies of users.
Deep-neural-network-based artificial intelligence (AI) solutions have been proposed to decrease the complexity.
In this work, we show that threats against the target AI model, which may originate from malicious users or radio units, can substantially decrease the network performance.
arXiv Detail & Related papers (2023-01-23T07:51:25Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks (a generic sketch of such a training loop follows this list).
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation [0.0]
Universal Adversarial Perturbations can seriously jeopardize the security and integrity of practical Deep Learning applications.
We demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass existing countermeasures; a toy UAP construction is sketched after this list.
arXiv Detail & Related papers (2021-11-18T02:54:10Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
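For the UniGuard entry above, a toy version of the "capture power fluctuations, then train a supervised model" recipe might look as follows. The synthetic traces and the random-forest choice are assumptions made for illustration; the paper's actual sensor is an on-chip Time-to-Digital Converter.

```python
# Toy threat detector over power-fluctuation traces (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, trace_len = 2000, 256
X_benign = rng.normal(0.0, 1.0, (n, trace_len))   # normal workload traces
X_attack = rng.normal(0.3, 1.2, (n, trace_len))   # traces under an active threat
X = np.vstack([X_benign, X_attack])
y = np.array([0] * n + [1] * n)                   # 0 = benign, 1 = threat

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```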
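The jet-tagging entry above (and, in spirit, the continuous-attack LLM entry) defends models by adversarial training. A minimal, generic FGSM-based training step is sketched below; the model, data, and hyperparameters are placeholders, not any paper's actual setup.

```python
# Generic adversarial training: fit the model on worst-case perturbed inputs.
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    # 1) Craft adversarial examples with one FGSM step.
    x_adv = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    x_adv = (x + eps * x_adv.grad.sign()).detach()
    # 2) Update the model on the perturbed batch.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
print(adversarial_training_step(model, optimizer, x, y))
```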
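For the universal-adversarial-perturbation entry, a toy construction in the spirit of the standard iterative UAP scheme accumulates per-sample FGSM steps over a dataset and clamps the running perturbation to an L-infinity budget. The model, dataset, and budget below are hypothetical.

```python
# Toy universal adversarial perturbation: one vector that degrades many inputs.
import torch
import torch.nn as nn

def universal_perturbation(model, dataset, eps=0.1, step=0.02, epochs=2):
    v = torch.zeros_like(dataset[0][0])
    for _ in range(epochs):
        for x, y in dataset:
            x_adv = (x + v).unsqueeze(0).requires_grad_(True)
            nn.functional.cross_entropy(model(x_adv), y.unsqueeze(0)).backward()
            # Ascend the loss, then project back onto the L-infinity ball.
            v = (v + step * x_adv.grad.squeeze(0).sign()).clamp(-eps, eps)
    return v

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
dataset = [(torch.randn(16), torch.randint(0, 4, ())) for _ in range(64)]
v = universal_perturbation(model, dataset)   # one input-agnostic perturbation
```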
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.