Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural
Networks with Power Consumption Information
- URL: http://arxiv.org/abs/2207.02764v1
- Date: Wed, 6 Jul 2022 15:56:30 GMT
- Title: Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural
Networks with Power Consumption Information
- Authors: Cory Merkel
- Abstract summary: Adversarial attacks on state-of-the-art machine learning models pose a significant threat to the safety and security of mission-critical autonomous systems.
This paper considers the additional vulnerability of machine learning models when attackers can measure the power consumption of their underlying hardware platform.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks on state-of-the-art machine learning models pose a
significant threat to the safety and security of mission-critical autonomous
systems. This paper considers the additional vulnerability of machine learning
models when attackers can measure the power consumption of their underlying
hardware platform. In particular, we explore the utility of power consumption
information for adversarial attacks on non-volatile memory crossbar-based
single-layer neural networks. Our results from experiments with MNIST and
CIFAR-10 datasets show that power consumption can reveal important information
about the neural network's weight matrix, such as the 1-norm of its columns.
That information can be used to infer the sensitivity of the network's loss
with respect to different inputs. We also find that surrogate-based black box
attacks that utilize crossbar power information can lead to improved attack
efficiency.
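The core observation, that per-input power measurements expose the column 1-norms of the weight matrix and that those norms proxy how sensitive the loss is to each input, can be illustrated with an idealized crossbar model. The sketch below assumes output columns held at virtual ground, a linear |weight|-to-conductance mapping, and one-hot read probes; the helper names (crossbar_power, estimate_column_norms_from_power, power_informed_fgsm) and the way the recovered norms reweight an FGSM-style surrogate step are illustrative assumptions, not the paper's exact procedure.

```python
# Idealized sketch (assumptions, not the paper's exact procedure): how per-input
# power probes of an NVM crossbar could expose the weight matrix's column
# 1-norms, and how those norms could reweight a surrogate-based attack step.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 784, 10                     # e.g. a single-layer MNIST classifier
W = 0.1 * rng.standard_normal((n_outputs, n_inputs))

def crossbar_power(v, W, g_scale=1e-4):
    """Static power of an ideal crossbar with output columns at virtual ground:
    each device dissipates V_i^2 * G_ij, so the total is sum_i V_i^2 * sum_j G_ij,
    assuming conductances proportional to |W| (illustrative mapping)."""
    G = g_scale * np.abs(W)
    return float(np.sum((v ** 2) * G.sum(axis=0)))

def estimate_column_norms_from_power(W, v_read=0.2, g_scale=1e-4):
    """Probe one input line at a time; the measured power gives ||W[:, i]||_1
    up to the known read voltage and conductance scale."""
    norms = np.empty(W.shape[1])
    for i in range(W.shape[1]):
        v = np.zeros(W.shape[1])
        v[i] = v_read
        norms[i] = crossbar_power(v, W, g_scale) / (v_read ** 2 * g_scale)
    return norms

col_norms = estimate_column_norms_from_power(W)
sensitivity = col_norms / col_norms.max()         # per-input sensitivity proxy in [0, 1]

def power_informed_fgsm(x, surrogate_grad, eps=0.3):
    """FGSM-style step on a surrogate gradient, scaled per input by the
    power-derived sensitivity (an illustrative way to combine the two signals)."""
    step = eps * np.sign(surrogate_grad) * sensitivity
    return np.clip(x + step, 0.0, 1.0)

x = rng.random(n_inputs)                          # stand-in input image
surrogate_grad = rng.standard_normal(n_inputs)    # stand-in gradient from a surrogate model
x_adv = power_informed_fgsm(x, surrogate_grad)

# In this noiseless model the recovered norms match ||W[:, i]||_1 exactly.
print(np.allclose(col_norms, np.abs(W).sum(axis=0)))
```

Probing one input line at a time makes the recovered norms exact in this idealized, noiseless model; real measurements would be noisier, but the attack only needs the ranking of input sensitivities, not their exact values.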
Related papers
- Discovery of False Data Injection Schemes on Frequency Controllers with Reinforcement Learning [7.540446548202259]
Inverter-based distributed energy resources (DERs) play a crucial role in integrating renewable energy into the power system.
We propose to employ reinforcement learning to identify potential threats and system vulnerabilities.
arXiv Detail & Related papers (2024-08-30T01:09:32Z)
- Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis [2.479074862022315]
This study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid.
It uses attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions.
arXiv Detail & Related papers (2023-12-21T09:54:18Z)
- PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on Compute-in-Memory Accelerators [10.592555190999537]
We demonstrate a machine learning-based attack approach using a generative adversarial network (GAN) to enhance the data reconstruction.
Our results show that the attack methodology is effective in reconstructing user inputs from analog CIM accelerator power leakage.
Our study highlights a potential security vulnerability in analog CIM accelerators and raises awareness of using GAN to breach user privacy in such systems.
arXiv Detail & Related papers (2023-04-13T18:50:33Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
With Boltzmann machines on the MNIST dataset, we find improvements ranging from 5% to 72% against such attacks.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Sponge Examples: Energy-Latency Attacks on Neural Networks [27.797657094947017]
We introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical.
We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200.
Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles.
arXiv Detail & Related papers (2020-06-05T14:10:09Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advances of machine learning for network anomaly detection, it is not easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can use adversarial attacks and successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The overall output of the system given by the aggregation of the outputs of individual networks can be tuned by a user to trade-off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.