Hardware Accelerator for Adversarial Attacks on Deep Learning Neural
Networks
- URL: http://arxiv.org/abs/2008.01219v1
- Date: Mon, 3 Aug 2020 21:55:41 GMT
- Title: Hardware Accelerator for Adversarial Attacks on Deep Learning Neural
Networks
- Authors: Haoqiang Guo, Lu Peng, Jian Zhang, Fang Qi, Lide Duan
- Abstract summary: A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
- Score: 7.20382137043754
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent studies identify that Deep Learning Neural Networks (DNNs)
are vulnerable to subtle perturbations, which are not perceptible to the human
visual system but can fool DNN models and lead to wrong outputs. A class of
adversarial attack network algorithms has been proposed to generate robust
physical perturbations under different circumstances. These algorithms are the
first efforts to move secure deep learning forward by providing an avenue to
train future defense networks; however, their intrinsic complexity prevents
their broader usage.
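For context, the canonical way such imperceptible perturbations are crafted is a gradient-sign step (FGSM). The sketch below is generic background rather than this paper's algorithm; `model`, `label`, and `epsilon` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """One-step FGSM: nudge each pixel by +/- epsilon along the sign of
    the loss gradient to push the model toward a wrong output."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # For small epsilon the change is imperceptible but can flip the prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```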
In this paper, we propose the first hardware accelerator for adversarial
attacks based on memristor crossbar arrays. Our design significantly improves
the throughput of a visual adversarial perturbation system, which can further
improve the robustness and security of future deep learning systems. Based on
the uniqueness of the algorithm, we propose four implementations of the
adversarial attack accelerator ($A^3$) to improve throughput, energy
efficiency, and computational efficiency.
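For intuition on the hardware side: a memristor crossbar evaluates a matrix-vector product in a single analog step, with weights stored as device conductances and column currents summing by Kirchhoff's current law. The following is an idealized numerical sketch of that principle, not the paper's $A^3$ design; the sizes and the Gaussian read-noise model are assumptions.

```python
import numpy as np

def crossbar_matvec(weights, voltages, noise_std=0.01, rng=None):
    """Idealized memristor crossbar: weights are programmed as conductances G,
    inputs are applied as row voltages V, and each output column current is
    I_j = sum_i G[i, j] * V[i]. One analog step replaces O(n^2) digital
    multiply-accumulates."""
    rng = np.random.default_rng() if rng is None else rng
    conductances = weights  # assume weights already mapped to the device range
    currents = voltages @ conductances
    # Real devices are noisy; model that with small Gaussian read noise.
    return currents + rng.normal(0.0, noise_std, currents.shape)

# Example: a 4x3 weight matrix evaluated in one analog step.
W = np.random.rand(4, 3)
x = np.random.rand(4)
print(crossbar_matvec(W, x))
```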
Related papers
- Deep Learning Algorithms Used in Intrusion Detection Systems -- A Review [0.0]
This review paper studies recent advancements in the application of deep learning techniques, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), autoencoders (AE), Multi-Layer Perceptrons (MLP), Self-Normalizing Networks (SNN), and hybrid models, within network intrusion detection systems.
arXiv Detail & Related papers (2024-02-26T20:57:35Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network (see the sketch below).
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
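To make the "local perceptions plus communication graph to actions" mapping concrete, here is a generic message-passing sketch; it is not the authors' architecture, and all shapes, names, and the mean-aggregation choice are assumptions.

```python
import numpy as np

def gnn_policy_layer(features, adjacency, w_self, w_neigh):
    """One message-passing step: each defender combines its own local
    perception with the mean of its neighbors' features, where neighbors
    are defined by the communication graph's adjacency matrix."""
    deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    neighbor_mean = (adjacency @ features) / deg
    return np.tanh(features @ w_self + neighbor_mean @ w_neigh)

# 4 defenders, 8-dim perceptions, ring communication graph; the final
# layer's output would be regressed onto expert actions (imitation loss).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
h = gnn_policy_layer(x, A, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```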
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture (sketched below).
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
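A rough sketch of how such a combined defense could look (an illustrative reading, not the article's implementation; `autoencoder` and `blocks` are assumed callables): the auto-encoder reconstructs the input to damp small perturbations, and a randomly chosen classifier block makes the prediction.

```python
import random
import torch

def ae_block_switch_predict(autoencoder, blocks, x):
    """Defense sketch: (1) an auto-encoder reconstructs the input, damping
    small adversarial perturbations; (2) a randomly selected classifier
    block makes the prediction, so an attacker cannot rely on gradients
    from any single fixed model."""
    x_clean = autoencoder(x)       # denoise / reconstruct the input
    block = random.choice(blocks)  # stochastic block switching
    with torch.no_grad():
        return block(x_clean).argmax(dim=1)
```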
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Evaluating Resilience of Encrypted Traffic Classification Against Adversarial Evasion Attacks [4.243926243206826]
In this paper, we focus on investigating the effectiveness of different evasion attacks and examine how resilient machine learning and deep learning algorithms are.
In most of our experimental results, deep learning shows better resilience against the adversarial samples in comparison to machine learning.
arXiv Detail & Related papers (2021-05-30T15:07:12Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can produce infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks [3.9193443389004887]
Adversarial attacks have exposed serious vulnerabilities in Deep Neural Networks (DNNs).
We propose and demonstrate sparsity attacks, which adversarially modify a DNN's inputs so as to reduce sparsity in its internal activation values.
We launch both white-box and black-box versions of adversarial sparsity attacks and demonstrate that they decrease activation sparsity by up to 1.82x (a minimal sketch of the white-box objective follows).
arXiv Detail & Related papers (2020-06-14T21:02:55Z)
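As a rough illustration of the white-box variant (my own sketch, not the paper's method; `model_features`, the soft-count surrogate, and `epsilon` are assumptions), one can take a gradient step that increases activation density:

```python
import torch

def sparsity_attack_step(model_features, x, epsilon=0.01):
    """White-box sparsity attack step: perturb the input to INCREASE the
    number of nonzero ReLU activations, raising energy and latency on
    hardware that exploits activation sparsity."""
    x = x.clone().detach().requires_grad_(True)
    acts = model_features(x)              # post-ReLU feature maps (>= 0)
    # Smooth surrogate for the nonzero count: tanh(k*a) is ~0 at a=0, ~1 for a>0.
    density = torch.tanh(50.0 * acts).sum()
    density.backward()
    x_adv = x + epsilon * x.grad.sign()   # FGSM-style ascent on density
    return x_adv.clamp(0.0, 1.0).detach()
```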
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively (a rough sketch follows).
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
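To illustrate the alternating scheme (a sketch under assumed module names `net.features`/`net.head` and a learnable `noise_std` tensor, not the authors' exact procedure): learnable noise is injected into intermediate features, and the network weights and noise parameters are updated in alternating phases.

```python
import torch
import torch.nn.functional as F

def learn2perturb_step(net, noise_std, x, y, opt_net, opt_noise, train_noise):
    """One alternating update: inject learnable Gaussian feature noise, then
    step EITHER the network weights or the noise parameters, mirroring an
    EM-style alternation. A full implementation would also regularize the
    noise magnitude so it does not simply collapse to zero."""
    feats = net.features(x)
    feats = feats + noise_std * torch.randn_like(feats)  # perturbation layer
    loss = F.cross_entropy(net.head(feats), y)
    opt = opt_noise if train_noise else opt_net
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```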
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.