An integrated Auto Encoder-Block Switching defense approach to prevent
adversarial attacks
- URL: http://arxiv.org/abs/2203.10930v1
- Date: Fri, 11 Mar 2022 10:58:24 GMT
- Title: An integrated Auto Encoder-Block Switching defense approach to prevent
adversarial attacks
- Authors: Anirudh Yadav, Ashutosh Upadhyay, S.Sharanya
- Abstract summary: The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: According to recent studies, the vulnerability of state-of-the-art Neural
Networks to adversarial input samples has increased drastically. A neural
network is a technique by which a computer learns to perform tasks using
machine learning algorithms. Machine Learning and Artificial Intelligence
models have become a fundamental aspect of life, powering systems such as
self-driving cars [1] and smart home devices, so any vulnerability is a
significant concern. The smallest input deviations can fool these extremely
literal systems and deceive their users as well as administrators into
precarious situations. This article proposes a defense algorithm that combines
an auto-encoder [3] with a block-switching architecture. The auto-encoder is
intended to remove any perturbations found in input images, whereas the
block-switching method makes the model more robust against white-box attacks.
The attack is crafted using the Fast Gradient Sign Method (FGSM) [9], and the
proposed architecture then counters it, demonstrating the feasibility and
security delivered by the algorithm.
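To make the pipeline concrete, below is a minimal PyTorch sketch of the three pieces the abstract describes: an FGSM attack (x_adv = x + eps * sign of the loss gradient), a denoising auto-encoder that scrubs the perturbation, and a block-switching classifier that randomizes which sub-model answers each query. This is an illustration under stated assumptions, not the authors' code: the module names (`DenoisingAE`, `BlockSwitch`), layer sizes, and the 28x28 grayscale input shape are all invented for the example, and the networks are left untrained for brevity.

```python
# Minimal sketch of the attack/defense pipeline described above
# (illustrative assumptions throughout; not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """FGSM [9]: perturb x one step along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

class DenoisingAE(nn.Module):
    """Auto-encoder meant to strip small adversarial perturbations from inputs."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

class BlockSwitch(nn.Module):
    """Block switching: several independently trained sub-models share the
    task, and one is picked at random per query, so a white-box attacker
    cannot rely on a single fixed gradient path."""
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
    def forward(self, x):
        idx = torch.randint(len(self.blocks), (1,)).item()
        return self.blocks[idx](x)

def make_cnn():
    # Toy sub-model for 1x28x28 inputs and 10 classes (untrained here).
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.Linear(8 * 28 * 28, 10))

ae, classifier = DenoisingAE(), BlockSwitch([make_cnn() for _ in range(3)])
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm_attack(classifier, x, y)   # craft the FGSM perturbation
logits = classifier(ae(x_adv))          # defend: denoise, then classify
```

The intuition behind the switching step is that an FGSM gradient computed against one randomly selected branch need not transfer to the branch that actually serves the next query, which blunts white-box attacks that assume a fixed model.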
Related papers
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Automating Defense Against Adversarial Attacks: Discovery of
Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed
Models [0.0]
We evaluate the use of multi-spectral image arrays and ensemble learners to combat adversarial attacks.
In rough analogy to defending cyber-networks, we combine techniques from both offensive ("red team") and defensive ("blue team") approaches.
arXiv Detail & Related papers (2021-03-29T19:07:55Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with bare minimum computation, generate adversarial examples (images or data points that belong to another class, but consistently fool the model to get misclassified as genuine) and crumble the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model (a minimal sketch of the dropout idea appears after this list).
arXiv Detail & Related papers (2020-12-10T09:34:41Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - A Generative Model based Adversarial Security of Deep Learning and
Linear Classifier Models [0.0]
We have proposed a mitigation method for adversarial attacks against machine learning models with an autoencoder model.
The main idea behind adversarial attacks against machine learning models is to produce erroneous results by manipulating trained models.
We have also presented the performance of autoencoder models against various attack methods, from deep neural networks to traditional algorithms.
arXiv Detail & Related papers (2020-10-17T17:18:17Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Hardware Accelerator for Adversarial Attacks on Deep Learning Neural
Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z) - NAttack! Adversarial Attacks to bypass a GAN based classifier trained to
detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies, which could imply an attack, were detected using well-crafted rules.
With the advancements of machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can use adversarial attacks and successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
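The "Empirical Review of Adversarial Defenses" entry above names Dropout as one of its two defenses. A common way to realize dropout as a defense is Monte Carlo dropout at inference time, where disagreement across stochastic forward passes flags inputs on which the model is unstable (adversarial examples often are). The sketch below is an assumed instantiation of that idea, not the cited paper's implementation; the layer sizes and the 0.1 threshold are illustrative.

```python
# Illustrative Monte Carlo dropout check (an assumed instantiation of the
# dropout defense mentioned above, not the cited paper's implementation).
import torch
import torch.nn as nn

# Toy classifier with a dropout layer (sizes are illustrative assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(256, 10))

def mc_dropout_uncertainty(model, x, passes=20):
    """Keep dropout active at inference; high variance across the stochastic
    passes can flag inputs the model's prediction is unstable on."""
    model.train()  # train mode leaves dropout enabled
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.var(dim=0).sum(dim=-1)

x = torch.rand(4, 1, 28, 28)              # batch of 28x28 grayscale inputs
mean_pred, uncertainty = mc_dropout_uncertainty(model, x)
suspect = uncertainty > 0.1               # threshold is an assumption
```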