Universal Adversarial Training with Class-Wise Perturbations
- URL: http://arxiv.org/abs/2104.03000v1
- Date: Wed, 7 Apr 2021 09:05:49 GMT
- Title: Universal Adversarial Training with Class-Wise Perturbations
- Authors: Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
- Abstract summary: Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a UAP does not attack all classes equally.
We improve the SOTA UAT by proposing to utilize class-wise UAPs during adversarial training.
- Score: 78.05383266222285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their overwhelming success on a wide range of applications,
convolutional neural networks (CNNs) are widely recognized to be vulnerable to
adversarial examples. This intriguing phenomenon led to a competition between
adversarial attacks and defense techniques. So far, adversarial training is the
most widely used method for defending against adversarial attacks. It has also
been extended to defend against universal adversarial perturbations (UAPs). The
SOTA universal adversarial training (UAT) method optimizes a single
perturbation for all training samples in the mini-batch. In this work, we find
that a UAP does not attack all classes equally. Inspired by this observation,
we identify this as the source of the model's unbalanced robustness. To this
end, we improve the SOTA UAT by proposing to utilize class-wise UAPs during
adversarial training. On multiple benchmark datasets, our class-wise UAT
achieves superior performance in both clean accuracy and adversarial robustness
against universal attacks.
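To make the class-wise idea concrete, here is a minimal PyTorch-style sketch of one training step, written from the abstract alone (not the authors' released code; `uaps`, `uap_lr`, and `eps` are hypothetical names and values): one universal perturbation is kept per class and updated alternately with the model weights.

```python
import torch
import torch.nn.functional as F

def class_wise_uat_step(model, opt, x, y, uaps, eps=10/255, uap_lr=0.01):
    # `uaps` has shape (num_classes, C, H, W): one universal perturbation
    # per class, persisted across mini-batches by the caller.
    delta = uaps[y].clone().requires_grad_(True)  # each sample gets its class's UAP

    # Maximization: one signed-gradient ascent step on the class-wise UAPs.
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        uaps.index_add_(0, y, uap_lr * grad.sign())  # accumulate per-class updates
        uaps.clamp_(-eps, eps)                       # project back into the eps-ball

    # Minimization: a standard descent step on the model weights.
    opt.zero_grad()
    F.cross_entropy(model(x + uaps[y].detach()), y).backward()
    opt.step()
```

The single-perturbation UAT baseline falls out of the same sketch by replacing `uaps[y]` with one shared `delta` for the whole mini-batch.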
Related papers
- Understanding and Improving Ensemble Adversarial Defense [4.504026914523449]
We develop a new error theory dedicated to understanding ensemble adversarial defense.
We propose an effective approach to improve ensemble adversarial defense, named interactive global adversarial training (iGAT).
iGAT boosts the performance of ensemble defenses by up to 17% on the CIFAR10 and CIFAR100 datasets under both white-box and black-box attacks.
arXiv Detail & Related papers (2023-10-27T20:43:29Z)
- Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing [5.1024659285813785]
Adversarial training has been the most successful defense against adversarial attacks.
We propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training.
Our method achieves state-of-the-art robustness against $\ell_\infty$-norm constrained attacks.
arXiv Detail & Related papers (2023-03-24T15:41:40Z)
- Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting [5.084323778393556]
Adversarial training with untargeted attacks is one of the most recognized defense methods.
We find that the naturally imbalanced inter-class semantic similarity makes hard-class pairs become virtual targets of each other.
We propose to upweight hard-class pair loss in model optimization, which prompts learning discriminative features from hard classes.
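A rough sketch of the reweighting idea (my hypothetical rendering of "upweight hard-class pair loss", not the paper's exact scheme; `confusion` and `alpha` are assumed to be maintained and tuned by the caller):

```python
import torch
import torch.nn.functional as F

def hard_pair_weighted_loss(logits, y, confusion, alpha=1.0):
    # `confusion` is a running (num_classes, num_classes) confusion matrix;
    # confusion[i, j] counts class-i samples predicted as class j. In practice
    # the diagonal could be zeroed so only misclassification pairs are upweighted.
    per_sample = F.cross_entropy(logits, y, reduction="none")
    pred = logits.argmax(dim=1)
    # How often each sample's true class is confused with its predicted class.
    pair_rate = confusion[y, pred] / confusion[y].sum(dim=1).clamp(min=1)
    weights = 1.0 + alpha * pair_rate  # harder class pair -> larger weight
    return (weights * per_sample).mean()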
arXiv Detail & Related papers (2022-10-26T22:51:36Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
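The overall recipe can be sketched with PyTorch's DistributedDataParallel (a schematic under standard-PGD assumptions, not the paper's actual framework; it assumes a `torchrun` launch with one process per GPU):

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

def pgd(model, x, y, eps=8/255, step=2/255, iters=7):
    # Standard PGD inner maximization on the local shard of the batch.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        grad, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def train(model, loader, opt, epochs=10):
    dist.init_process_group("nccl")  # rank/world size come from torchrun's env vars
    model = DDP(model.cuda())
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            # Attack via the unwrapped replica to keep DDP's reducer out of the inner loop.
            delta = pgd(model.module, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x + delta), y).backward()  # DDP all-reduces the gradients
            opt.step()
```

The effective adversarial batch is the per-GPU batch times the world size, which is what makes the large-batch regime practical.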
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features under arbitrary attacking strengths.
Our method is trained to automatically align features across those attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Meta Adversarial Training [11.731001328350985]
Adversarial training is the most effective defense against image-dependent adversarial attacks.
However, tailoring adversarial training to universal perturbations is computationally expensive.
We present results for universal patch and universal perturbation attacks on image classification and traffic-light detection.
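One way to read the amortization idea (a hedged sketch of the general mechanism, not the paper's algorithm; `pool` and `pert_lr` are hypothetical): persist a pool of shared perturbations and refine a sampled one with a single cheap step per mini-batch, rather than re-optimizing a universal perturbation from scratch.

```python
import torch
import torch.nn.functional as F

def meta_at_step(model, opt, x, y, pool, eps=10/255, pert_lr=0.02):
    # `pool` has shape (P, C, H, W): P shared perturbations persisted across batches.
    i = torch.randint(len(pool), ()).item()       # sample one shared perturbation
    delta = pool[i].clone().requires_grad_(True)  # broadcasts over the batch
    grad, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
    with torch.no_grad():                         # one cheap refinement step
        pool[i] = (pool[i] + pert_lr * grad.sign()).clamp(-eps, eps)
    opt.zero_grad()
    F.cross_entropy(model(x + pool[i]), y).backward()
    opt.step()
```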
arXiv Detail & Related papers (2021-01-27T14:36:23Z)
- Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy [85.20742045853738]
CNNs are widely known to be vulnerable to adversarial attacks.
We conduct an empirical study on the class-wise accuracy and robustness of adversarially trained models.
We find that there exists inter-class discrepancy for accuracy and robustness even when the training dataset has an equal number of samples for each class.
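The measurement itself is simple to reproduce; a minimal sketch (mine, not the paper's code), where running the same loop on attacked inputs yields the class-wise robustness:

```python
import torch

@torch.no_grad()
def class_wise_accuracy(model, loader, num_classes):
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum()
            correct[c] += (pred[mask] == c).sum()
    return correct / total.clamp(min=1)  # per-class accuracy vector
```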
arXiv Detail & Related papers (2020-10-26T06:32:32Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
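Reading "double targeted" as attacking a chosen source class toward a chosen sink class while sparing the rest, one optimization step might look like this (a speculative sketch from the abstract alone; `lam`, `lr`, and the two-term loss are my assumptions):

```python
import torch
import torch.nn.functional as F

def dt_uap_step(model, x, y, delta, source, sink, eps=10/255, lr=0.01, lam=1.0):
    # Assumes the batch contains both source-class and non-source samples.
    delta = delta.clone().requires_grad_(True)  # shared (C, H, W) perturbation
    logits = model(x + delta)
    src = y == source
    # Push source-class samples toward the sink label; keep the rest unchanged.
    loss = F.cross_entropy(logits[src], torch.full_like(y[src], sink)) \
         + lam * F.cross_entropy(logits[~src], y[~src])
    grad, = torch.autograd.grad(loss, delta)
    return (delta - lr * grad.sign()).clamp(-eps, eps).detach()
```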
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant to adversarial perturbations of the inputs.
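The domain-adaptation reading suggests a GAN-style training step (a hedged sketch of the idea, not the paper's exact objective; `disc` is assumed to map features to one logit, and `x_adv` to come from any standard attack):

```python
import torch
import torch.nn.functional as F

def afd_step(encoder, classifier, disc, opt_main, opt_disc, x, x_adv, y):
    f_clean, f_adv = encoder(x), encoder(x_adv)
    ones, zeros = torch.ones(len(x), 1), torch.zeros(len(x), 1)

    # 1) Discriminator learns to tell clean features (1) from adversarial ones (0).
    d_loss = F.binary_cross_entropy_with_logits(disc(f_clean.detach()), ones) \
           + F.binary_cross_entropy_with_logits(disc(f_adv.detach()), zeros)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Encoder/classifier classify adversarial features correctly while fooling
    #    the discriminator, desensitizing the features to the perturbation.
    g_loss = F.cross_entropy(classifier(f_adv), y) \
           + F.binary_cross_entropy_with_logits(disc(f_adv), ones)
    opt_main.zero_grad(); g_loss.backward(); opt_main.step()
```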
arXiv Detail & Related papers (2020-06-08T14:20:02Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.