Attacking Deep Learning AI Hardware with Universal Adversarial
Perturbation
- URL: http://arxiv.org/abs/2111.09488v1
- Date: Thu, 18 Nov 2021 02:54:10 GMT
- Title: Attacking Deep Learning AI Hardware with Universal Adversarial
Perturbation
- Authors: Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, and Md Tauhidur
Rahman
- Abstract summary: Universal Adversarial Perturbations can seriously jeopardize the security and integrity of practical Deep Learning applications.
We demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass existing countermeasures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal Adversarial Perturbations are image-agnostic and model-independent
noise that, when added to any image, can mislead trained Deep Convolutional
Neural Networks into wrong predictions. Since these Universal Adversarial
Perturbations can seriously jeopardize the security and integrity of practical
Deep Learning applications, existing techniques use additional neural networks
to detect the presence of this noise at the input image source. In this
paper, we demonstrate an attack strategy that, when activated by rogue means
(e.g., malware, trojan), can bypass these existing countermeasures by adding
the adversarial noise at the AI hardware accelerator stage. We demonstrate the
accelerator-level universal adversarial noise attack on several Deep Learning
models using co-simulation of the software kernel of the Conv2D function and the
Verilog RTL model of the hardware under the FuseSoC environment.
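
Below is a minimal, illustrative Python/NumPy sketch of the attack concept described in the abstract: a Conv2D compute routine that adds a stored universal adversarial perturbation to its input tile only when a rogue trigger is active, so the noise enters after any input-side detector has already inspected the image. All names, shapes, and values (UAP_TILE, TRIGGER_ACTIVE, conv2d_with_injection) are assumptions made for illustration; they are not taken from the paper's FuseSoC software/Verilog RTL co-simulation.

# Illustrative sketch only (not the paper's implementation): a software stand-in
# for a Conv2D kernel that injects a stored universal adversarial perturbation
# (UAP) into its input tile when a hypothetical rogue trigger is active.
import numpy as np

H, W, K = 8, 8, 3                         # toy feature-map and kernel sizes
UAP_TILE = 0.05 * np.random.randn(H, W)   # stand-in for a precomputed UAP tile
TRIGGER_ACTIVE = True                     # stand-in for a malware/trojan trigger

def conv2d_with_injection(x, w):
    """Valid 2-D cross-correlation (the Conv2D used in CNNs); injects the UAP only when triggered."""
    if TRIGGER_ACTIVE:
        x = x + UAP_TILE                  # noise enters inside the compute path
    out_h, out_w = x.shape[0] - K + 1, x.shape[1] - K + 1
    y = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y[i, j] = np.sum(x[i:i + K, j:j + K] * w)
    return y

x = np.random.randn(H, W)                 # clean input tile, already past any input-side check
w = np.random.randn(K, K)                 # trained convolution weights
print(conv2d_with_injection(x, w).shape)  # -> (6, 6)

In the paper itself, the injection point is the hardware accelerator (a Verilog RTL model co-simulated with the Conv2D software kernel under FuseSoC); this sketch only mirrors that dataflow in software to show why a detector at the input image source never sees the perturbed tensor.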
Related papers
- Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks [21.611341074006162]
We present AIAShield, a novel defense methodology to safeguard FPGA-based AI accelerators.
We leverage the prominent adversarial attack technique from the machine learning community to craft delicate noise.
AIAShield outperforms existing solutions with excellent transferability.
arXiv Detail & Related papers (2023-12-07T04:38:01Z)
- NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks [21.86821880164293]
Adversarial attacks can easily mislead a neural network and lead to wrong decisions.
In this paper, we use the gradient class activation map (GradCAM) to analyze the behavior deviation of the VGG-16 network.
We also propose a novel NoiseCAM algorithm that integrates information from globally weighted and pixel-level weighted class activation maps.
arXiv Detail & Related papers (2023-03-09T22:07:41Z)
- A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks [0.0]
This paper proposes an AI trust platform using Streamlit for NextG networks.
It allows researchers to evaluate, defend, certify, and verify their AI models and applications against adversarial threats.
arXiv Detail & Related papers (2022-10-25T05:26:30Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.