PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on
Compute-in-Memory Accelerators
- URL: http://arxiv.org/abs/2304.11056v2
- Date: Sat, 27 May 2023 18:06:54 GMT
- Title: PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on
Compute-in-Memory Accelerators
- Authors: Ziyu Wang, Yuting Wu, Yongmo Park, Sangmin Yoo, Xinxin Wang, Jason K.
Eshraghian, and Wei D. Lu
- Abstract summary: We demonstrate a machine learning-based attack approach using a generative adversarial network (GAN) to enhance the data reconstruction.
Our results show that the attack methodology is effective in reconstructing user inputs from analog CIM accelerator power leakage.
Our study highlights a potential security vulnerability in analog CIM accelerators and raises awareness that GANs can be used to breach user privacy in such systems.
- Score: 10.592555190999537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analog compute-in-memory (CIM) systems are promising for deep neural network
(DNN) inference acceleration due to their energy efficiency and high
throughput. However, as the use of DNNs expands, protecting user input privacy
has become increasingly important. In this paper, we identify a potential
security vulnerability wherein an adversary can reconstruct the user's private
input data from a power side-channel attack, under proper data acquisition and
pre-processing, even without knowledge of the DNN model. We further demonstrate
a machine learning-based attack approach using a generative adversarial network
(GAN) to enhance the data reconstruction. Our results show that the attack
methodology is effective in reconstructing user inputs from analog CIM
accelerator power leakage, even at large noise levels and after countermeasures
are applied. Specifically, we demonstrate the efficacy of our approach on an
example U-Net inference chip for brain tumor detection, and show that the
original magnetic resonance imaging (MRI) medical images can be successfully
reconstructed even at a noise level whose standard deviation is 20% of the
maximum power signal value. Our study highlights a potential security
vulnerability in analog CIM accelerators and raises awareness that GANs can be
used to breach user privacy in such systems.
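As a rough illustration of the pipeline the abstract describes, the PyTorch sketch below maps a captured 1-D power trace to a reconstructed image, with Gaussian measurement noise injected at the 20%-of-maximum level mentioned above. The architecture, trace length, and image size are illustrative assumptions, not the authors' actual PowerGAN design, and the adversarial (discriminator) training loop is omitted.

```python
# Hedged sketch of GAN-based input reconstruction from power leakage.
# All sizes and layers are illustrative placeholders.
import torch
import torch.nn as nn

TRACE_LEN = 4096  # hypothetical number of power samples per inference

class Generator(nn.Module):
    """Maps a 1-D power trace to a 64x64 single-channel image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(TRACE_LEN, 128 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8  -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, trace):
        return self.up(self.fc(trace).view(-1, 128, 8, 8))

def add_measurement_noise(trace, rel_std=0.20):
    """Gaussian noise whose std is rel_std times the maximum power value."""
    return trace + rel_std * trace.abs().max() * torch.randn_like(trace)

trace = torch.rand(1, TRACE_LEN)  # stand-in for a captured power trace
print(Generator()(add_measurement_noise(trace)).shape)  # [1, 1, 64, 64]
```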
Related papers
- Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection [0.0]
We propose an Enhanced Convolutional Neural Network (EnCNN) for Network Intrusion Detection Systems (NIDS).
We compare EnCNN with various machine learning algorithms, including Logistic Regression, Decision Trees, Support Vector Machines (SVM), and ensemble methods like Random Forest, AdaBoost, and Voting Ensemble.
The results show that EnCNN significantly improves detection accuracy, with a notable 10% increase over state-of-the-art approaches.
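The summary centers on a comparison against classical baselines; a minimal scikit-learn sketch of that kind of comparison, on synthetic stand-in data, might look as follows (EnCNN itself is not shown, since its architecture is not described here).

```python
# Hedged sketch: classical NIDS baselines compared on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for network-flow features; not a real NIDS dataset.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

baselines = {
    "LogReg": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
baselines["Voting"] = VotingClassifier([(k, v) for k, v in list(baselines.items())[:3]])

for name, clf in baselines.items():
    print(name, clf.fit(Xtr, ytr).score(Xte, yte))
```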
arXiv Detail & Related papers (2024-09-27T11:20:20Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
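The key constraint in this summary is imperceptibility. A minimal sketch of an epsilon-bounded view perturbation, the constraint the optimized spatial deformation must satisfy, could look like this; the deformation itself is the paper's contribution and is not reproduced here.

```python
# Hedged sketch: an epsilon-bounded (imperceptible) image perturbation.
# A random delta stands in for the paper's optimized spatial deformation.
import numpy as np

def poison_view(img, eps=2.0 / 255.0, rng=np.random.default_rng(0)):
    """Add a bounded perturbation that stays invisible to the human eye."""
    delta = rng.uniform(-eps, eps, size=img.shape)
    return np.clip(img + delta, 0.0, 1.0)

views = [np.random.rand(64, 64, 3) for _ in range(3)]  # stand-in training views
poisoned = [poison_view(v) for v in views]
print(max(np.abs(p - v).max() for p, v in zip(poisoned, views)))  # <= eps
```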
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Output Feedback Tube MPC-Guided Data Augmentation for Robust, Efficient Sensorimotor Policy Learning [49.05174527668836]
Imitation learning (IL) can generate computationally efficient sensorimotor policies from demonstrations provided by computationally expensive model-based sensing and control algorithms.
In this work, we combine IL with an output feedback robust tube model predictive controller to co-generate demonstrations and a data augmentation strategy to efficiently learn neural network-based sensorimotor policies.
We numerically demonstrate that our method can learn a robust visuomotor policy from a single demonstration, a two-orders-of-magnitude improvement in demonstration efficiency compared to existing IL methods.
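A hedged sketch of the tube-guided augmentation idea: states sampled inside a robust tube around a single demonstration are relabeled with an ancillary feedback law u = u_nom + K (x - x_nom). The gain, tube radius, and trajectory below are illustrative placeholders, not the paper's controller.

```python
# Hedged sketch: tube-guided data augmentation from one demonstration.
import numpy as np

rng = np.random.default_rng(0)
x_nom = np.cumsum(rng.normal(size=(50, 2)), axis=0)  # nominal demo states
u_nom = rng.normal(size=(50, 1))                     # nominal demo actions
K = np.array([[0.5, -0.1]])                          # assumed ancillary feedback gain
tube_radius = 0.2                                    # assumed robust tube size

aug_states, aug_actions = [], []
for x, u in zip(x_nom, u_nom):
    for _ in range(10):                              # 10 augmented samples per step
        dx = rng.uniform(-tube_radius, tube_radius, size=2)
        aug_states.append(x + dx)
        aug_actions.append(u + K @ dx)               # relabel with tube feedback

print(len(aug_states))  # 500 training pairs from a single demonstration
```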
arXiv Detail & Related papers (2022-10-18T19:59:17Z)
- Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
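The scale-variant property of cross-entropy mentioned above is easy to verify directly: scaling the logits leaves the prediction unchanged but changes the loss, which is why the raw loss value is a poor proxy for the effective margin. A quick demonstration:

```python
# Demonstration of the scale-variance of cross-entropy loss.
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 1.0, 0.5]])
target = torch.tensor([0])
for scale in (0.5, 1.0, 10.0):
    # Same argmax every time, but the loss shrinks as the scale grows.
    print(scale, F.cross_entropy(logits * scale, target).item())
```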
arXiv Detail & Related papers (2022-10-11T03:16:56Z)
- Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information [0.0]
Adversarial attacks on state-of-the-art machine learning models pose a significant threat to the safety and security of mission-critical autonomous systems.
This paper considers the additional vulnerability of machine learning models when attackers can measure the power consumption of their underlying hardware platform.
arXiv Detail & Related papers (2022-07-06T15:56:30Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based defense aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
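A minimal sketch of the defense pipeline described above, with placeholder networks standing in for the paper's trained GAN generator and AMC classifier:

```python
# Hedged sketch: a generator "purifies" an input before classification.
import torch
import torch.nn as nn

# Stand-in for the trained GAN generator (the purifier).
purifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
classifier = nn.Sequential(nn.Linear(128, 11))  # e.g., 11 modulation classes

def classify_defended(x):
    return classifier(purifier(x)).argmax(dim=-1)

x_adv = torch.randn(4, 128)  # stand-in for adversarially perturbed features
print(classify_defended(x_adv))
```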
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
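A hedged sketch of the range-supervision idea: activations are clamped to bounds profiled on fault-free runs, so a hardware fault that blows an activation up is suppressed rather than silently corrupting the output. The bounds and layer sizes below are illustrative.

```python
# Hedged sketch: clamping activations to profiled fault-free ranges.
import torch
import torch.nn as nn

class RangeRestriction(nn.Module):
    def __init__(self, lo, hi):
        super().__init__()
        self.lo, self.hi = lo, hi

    def forward(self, x):
        return x.clamp(self.lo, self.hi)  # suppress out-of-range activations

layer = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), RangeRestriction(0.0, 6.0))
faulty = torch.randn(2, 16) * 1e6     # a bit flip can blow an activation up
print(layer(faulty).max() <= 6.0)     # tensor(True)
```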
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics), where the low-level control is based on, e.g., an extended Kalman filter (EKF) and an anomaly detector.
To facilitate analysis of the impact that potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
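A minimal sketch of the residual-based anomaly detection setup that such an attack generator must evade, simplified to a scalar measurement with an assumed innovation covariance:

```python
# Hedged sketch: chi-square test on a (scalar) EKF innovation.
from scipy.stats import chi2

S = 0.5                            # assumed innovation covariance
threshold = chi2.ppf(0.99, df=1)   # 99% chi-square test, 1-D measurement

def anomalous(z, z_pred):
    r = z - z_pred                 # EKF innovation (residual)
    return (r * r) / S > threshold

print(anomalous(1.0, 0.9), anomalous(5.0, 0.9))  # False True
```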
arXiv Detail & Related papers (2021-03-10T06:52:26Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
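As a hedged illustration of the white-box attack style, the sketch below applies a single FGSM-type step to a toy power-allocation network; the model, loss, and epsilon are stand-ins, not the paper's setup.

```python
# Hedged sketch: one FGSM step against a toy power-allocation NN.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(1, 8, requires_grad=True)   # stand-in NN input
target = torch.rand(1, 4)                   # stand-in allocation target

loss = nn.functional.mse_loss(model(x), target)
loss.backward()
x_adv = (x + 0.01 * x.grad.sign()).detach()       # small FGSM perturbation
print((x_adv - x.detach()).abs().max())           # bounded by epsilon = 0.01
```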
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
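A minimal sketch of the detector-subnetwork idea: a second head reads an intermediate feature map and scores the input as adversarial or benign. The sizes and layers are illustrative placeholders.

```python
# Hedged sketch: classifier augmented with a detector subnetwork.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier_head = nn.Linear(64, 10)                           # class scores
detector_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid()) # P(adversarial)

x = torch.randn(4, 32)
features = backbone(x)  # shared features feed both heads
print(classifier_head(features).argmax(-1), detector_head(features).squeeze(-1))
```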
arXiv Detail & Related papers (2021-01-05T14:31:53Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches (seconds versus ...).
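A hedged sketch of the noise-response idea: sweep input-noise levels and record how often the model's prediction flips. The resulting sensitivity profile is the kind of rapid feature the summary describes; the model and noise levels below are placeholders.

```python
# Hedged sketch: noise-response profile of a model's predictions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
x = torch.randn(200, 20)
base = model(x).argmax(-1)  # predictions on clean inputs

for sigma in (0.1, 0.5, 1.0):
    noisy = model(x + sigma * torch.randn_like(x)).argmax(-1)
    flipped = (noisy != base).float().mean()
    print(sigma, flipped.item())  # fraction of predictions that change
```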
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.