A Mask-Based Adversarial Defense Scheme
- URL: http://arxiv.org/abs/2204.11837v1
- Date: Thu, 21 Apr 2022 12:55:27 GMT
- Title: A Mask-Based Adversarial Defense Scheme
- Authors: Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang
- Abstract summary: Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks.
- Score: 3.759725391906588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks hamper the functionality and accuracy of Deep Neural
Networks (DNNs) by introducing subtle perturbations to their inputs. In this
work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to
mitigate the negative effects of adversarial attacks. To be precise, our
method promotes the robustness of a DNN by randomly masking a portion of
potential adversarial images, and as a result, the output of the DNN becomes
more tolerant to minor input perturbations. Compared with existing adversarial
defense techniques, our method requires neither an additional denoising
structure nor any change to a DNN's design. We have tested this approach on a
collection of DNN models for a variety of data sets, and the experimental
results confirm that the proposed method can effectively improve the defense
abilities of the DNNs against all of the tested adversarial attack methods. In
certain scenarios, the DNN models trained with MAD improve classification
accuracy by as much as 20% to 90% compared to the original models given
adversarial inputs.
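The abstract describes the core idea as randomly masking a portion of a potentially adversarial input so that the DNN's output becomes less sensitive to small perturbations. The paper's code is not included here, so the sketch below is only an illustrative interpretation of input masking: the function name random_mask, the patch-based masking granularity, and the mask_ratio and patch_size parameters are assumptions for illustration, not the authors' implementation.

```python
import torch

def random_mask(images: torch.Tensor, mask_ratio: float = 0.1,
                patch_size: int = 4) -> torch.Tensor:
    """Randomly zero out a fraction of non-overlapping patches in each image.

    images: tensor of shape (N, C, H, W); H and W are assumed to be
        divisible by patch_size.
    mask_ratio: fraction of patches to mask in each image (an assumed
        hyperparameter, not taken from the paper).
    """
    n, c, h, w = images.shape
    ph, pw = h // patch_size, w // patch_size
    num_patches = ph * pw
    num_masked = int(mask_ratio * num_patches)

    masked = images.clone()
    for i in range(n):
        # Pick a random subset of patch indices to mask for this image.
        idx = torch.randperm(num_patches)[:num_masked]
        for j in idx.tolist():
            row, col = divmod(j, pw)
            masked[i, :,
                   row * patch_size:(row + 1) * patch_size,
                   col * patch_size:(col + 1) * patch_size] = 0.0
    return masked
```

Under this reading, the masked images would be fed to the DNN during training and/or inference in place of the raw inputs; the exact point in the pipeline where masking is applied is described in the full paper, not in the abstract.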