An Empirical Review of Adversarial Defenses
- URL: http://arxiv.org/abs/2012.06332v1
- Date: Thu, 10 Dec 2020 09:34:41 GMT
- Title: An Empirical Review of Adversarial Defenses
- Authors: Ayush Goel
- Abstract summary: Deep neural networks, which form the basis of many AI systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into classifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
- Score: 0.913755431537592
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: From face recognition systems installed in phones to self-driving cars, the
field of AI is witnessing rapid transformations and is being integrated into
our everyday lives at an incredible pace. Any major failure in these systems'
predictions could be devastating, leaking sensitive information or even costing
lives (as in the case of self-driving cars). However, deep neural networks,
which form the basis of such systems, are highly susceptible to a specific type
of attack, called adversarial attacks. A hacker can, even with minimal
computation, generate adversarial examples (images or data points that belong
to another class but consistently fool the model into classifying them as
genuine) and undermine the basis of such algorithms. In this paper, we compile
and test numerous approaches to defend against such adversarial attacks. Out of
the ones explored, we found two effective techniques, namely Dropout and
Denoising Autoencoders, and show their success in preventing such attacks from
fooling the model. We demonstrate that these techniques are also resistant both
to higher noise levels and to different kinds of adversarial attacks (although
not tested against all). We also develop a framework for deciding the
suitable defense technique to use against attacks, based on the nature of the
application and resource constraints of the Deep Neural Network.
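To make the abstract's claim concrete, the sketch below illustrates, in generic form, the style of defense pipeline the paper describes: a convolutional denoising autoencoder that cleans incoming images before they reach a dropout-regularized CNN classifier. This is a minimal illustration, not the paper's code; the Keras architecture, the MNIST data, and every hyperparameter are assumptions made purely for the example.

```python
# Minimal sketch (NOT the paper's implementation) of the two defenses the
# abstract names: a denoising autoencoder used as an input filter, and
# dropout used as regularization in the classifier. Architecture, dataset,
# and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_denoising_autoencoder():
    """Convolutional autoencoder trained to map noisy images back to clean ones."""
    inp = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(encoded)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="binary_crossentropy")
    return ae

def build_classifier(dropout_rate=0.5):
    """Simple CNN classifier; the Dropout layers are the defense of interest."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(2),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout_rate),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training sketch: the autoencoder learns to remove perturbations from
# (noisy -> clean) pairs. Gaussian noise stands in for the adversarial
# perturbation here purely for illustration.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_noisy = np.clip(x_train + np.random.normal(0, 0.3, x_train.shape), 0.0, 1.0)

autoencoder = build_denoising_autoencoder()
autoencoder.fit(x_noisy, x_train, epochs=5, batch_size=128)

classifier = build_classifier()
classifier.fit(x_train, y_train, epochs=5, batch_size=128)

# Defended inference: suspect inputs are first denoised, then classified.
def defended_predict(x):
    return classifier.predict(autoencoder.predict(x))
```

In a real evaluation, the autoencoder would be trained against the actual adversarial perturbations under study rather than Gaussian noise, and the dropout rate would be tuned per model; both choices above are placeholders.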
Related papers
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend against common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models [15.007794089091616]
Convolutional neural networks (CNNs) are a key component in autonomous driving.
Previous work shows CNN-based classification models are vulnerable to adversarial attacks.
This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models.
arXiv Detail & Related papers (2020-02-06T09:49:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.