A Mask-Based Adversarial Defense Scheme
- URL: http://arxiv.org/abs/2204.11837v1
- Date: Thu, 21 Apr 2022 12:55:27 GMT
- Title: A Mask-Based Adversarial Defense Scheme
- Authors: Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang
- Abstract summary: Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
- Score: 3.759725391906588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks hamper the functionality and accuracy of Deep Neural
Networks (DNNs) by meddling with subtle perturbations to their inputs. In this
work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to
mitigate the negative effect from adversarial attacks. To be precise, our
method promotes the robustness of a DNN by randomly masking a portion of
potential adversarial images, and as a result, the output of the DNN becomes
more tolerant to minor input perturbations. Compared
with existing adversarial defense techniques, our method does not need any
additional denoising structure, nor any change to a DNN's design. We have
tested this approach on a collection of DNN models for a variety of data sets,
and the experimental results confirm that the proposed method can effectively
improve the defense abilities of the DNNs against all of the tested adversarial
attack methods. In certain scenarios, the DNN models trained with MAD have
improved classification accuracy by as much as 20% to 90% compared to the
original models that are given adversarial inputs.
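The abstract describes the mechanism but includes no code; as a point of reference, here is a minimal PyTorch sketch of the core operation — zeroing out a random portion of a (potentially adversarial) input before it reaches the classifier. The function name, `mask_ratio`, and `patch` are illustrative assumptions, not values taken from the paper.

```python
import torch

def random_mask(images: torch.Tensor, mask_ratio: float = 0.2,
                patch: int = 4) -> torch.Tensor:
    """Zero out a random fraction of patch-aligned regions of a batch.

    images: (N, C, H, W) tensor; assumes H and W are divisible by `patch`.
    mask_ratio and patch are illustrative hyperparameters.
    """
    n, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    # One keep/drop coin flip per patch, shared across all channels.
    keep = (torch.rand(n, 1, gh, gw, device=images.device) > mask_ratio).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * keep

# Usage with a hypothetical classifier `model`:
# logits = model(random_mask(x))
```

Training on such masked inputs is what would make the classifier's output tolerant to small, localized perturbations.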
Related papers
- Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM [5.592360872268223]
Defense strategies usually train deep neural networks (DNNs) against a specific adversarial attack method and can achieve good robustness against that type of attack.
However, when subjected to evaluations involving unfamiliar attack modalities, empirical evidence reveals a pronounced deterioration in the robustness of DNNs.
Most defense methods often sacrifice the accuracy of clean examples in order to improve the adversarial robustness of DNNs.
arXiv Detail & Related papers (2024-03-18T03:54:01Z)
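For reference, FGSM — the attack named in the title of the entry above — perturbs an input by a single signed-gradient step. A minimal PyTorch sketch follows; the epsilon value is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step Fast Gradient Sign Method: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```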
- Robustness Against Adversarial Attacks via Learning Confined Adversarial Polytopes [0.0]
Deep neural networks (DNNs) can be deceived by human-imperceptible perturbations added to clean samples.
In this paper, we aim to train robust DNNs by limiting the set of outputs reachable via a norm-bounded perturbation added to a clean sample.
arXiv Detail & Related papers (2024-01-15T22:31:15Z)
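The norm-bounded perturbations mentioned in the entry above are conventionally kept inside an epsilon-ball around the clean sample. The sketch below shows the standard L-infinity projection; it is generic machinery under an assumed [0, 1] pixel range, not the paper's training procedure:

```python
import torch

def project_linf(x_adv: torch.Tensor, x_clean: torch.Tensor,
                 eps: float = 8 / 255) -> torch.Tensor:
    """Force ||x_adv - x_clean||_inf <= eps while keeping pixels in [0, 1]."""
    delta = (x_adv - x_clean).clamp(-eps, eps)
    return (x_clean + delta).clamp(0, 1)
```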
- Improving the Robustness of Quantized Deep Neural Networks to White-Box Attacks using Stochastic Quantization and Information-Theoretic Ensemble Training [1.6098666134798774]
Most real-world applications that employ deep neural networks (DNNs) quantize them to low precision to reduce the compute needs.
We present a method to improve the robustness of quantized DNNs to white-box adversarial attacks.
arXiv Detail & Related papers (2023-11-30T17:15:58Z)
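Stochastic quantization, one of the two ingredients in the entry above, replaces deterministic rounding with randomized rounding, which blurs the gradients an attacker can exploit. A minimal sketch of stochastic rounding to a uniform symmetric grid; the bit width and per-tensor scaling are illustrative assumptions, not the paper's exact scheme:

```python
import torch

def stochastic_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Map values onto a uniform grid with stochastic (unbiased) rounding."""
    levels = 2 ** bits - 1
    scale = w.abs().max().clamp(min=1e-8)   # symmetric per-tensor scale
    g = (w / scale + 1) / 2 * levels        # [-scale, scale] -> [0, levels]
    g = g.floor() + (torch.rand_like(g) < g - g.floor()).float()
    return (g / levels * 2 - 1) * scale     # back to [-scale, scale]
```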
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based defense aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? [0.0]
We present an adversarial analysis of different approximate DNN accelerators (AxDNNs) using state-of-the-art approximate multipliers.
Our results demonstrate that adversarial attacks on AxDNNs can cause up to 53% accuracy loss, whereas the same attacks may lead to almost no accuracy loss on the corresponding accurate DNNs.
arXiv Detail & Related papers (2021-12-02T19:01:36Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against adversarial attacks is adversarial training, a technique in which a DNN is trained to withstand such attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights.
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on the natural images' classification.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
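Test-time augmentation in the spirit of the KATANA entry above amounts to averaging a frozen model's predictions over randomly augmented copies of the input, leaving the weights untouched. In this sketch the augmentations (random flips and padded crops) and `n_aug` are illustrative placeholders, not the paper's exact choices:

```python
import torch
import torchvision.transforms as T

def tta_predict(model, x: torch.Tensor, n_aug: int = 8) -> torch.Tensor:
    """Average softmax outputs over random augmentations of the input batch."""
    aug = T.Compose([T.RandomHorizontalFlip(),
                     T.RandomCrop(x.shape[-1], padding=4)])
    with torch.no_grad():
        probs = torch.stack([model(aug(x)).softmax(dim=-1)
                             for _ in range(n_aug)])
    return probs.mean(dim=0)
```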
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization [0.0]
This paper proposes an adversarial attack method against deep neural networks (DNNs) for monocular depth estimation, i.e., estimating the depth from a single image.
arXiv Detail & Related papers (2020-12-29T14:01:11Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
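The gradient-norm idea behind GraN can be approximated by scoring an input with the norm of the loss gradient taken at the model's own predicted label: adversarial and misclassified inputs tend to produce larger norms. The sketch below uses the input gradient for simplicity; the paper's exact features and the detection threshold are not reproduced here:

```python
import torch
import torch.nn.functional as F

def grad_norm_score(model, x: torch.Tensor) -> torch.Tensor:
    """Per-example L2 norm of the input gradient of the self-labeled loss."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    loss.backward()
    return x.grad.flatten(1).norm(dim=1)  # higher score = more suspicious
```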
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.