AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation
- URL: http://arxiv.org/abs/2306.07197v1
- Date: Mon, 12 Jun 2023 15:54:52 GMT
- Title: AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation
- Authors: Lin Li, Jianing Qiu, Michael Spratling
- Abstract summary: Adversarial training (AT) is an effective defense against adversarial examples.
AT is prone to overfitting which degrades robustness substantially.
This work proposes a new method to automatically learn online, instance-wise, DA policies to improve robust generalization for AT.
- Score: 7.12940198032571
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks are vulnerable to adversarial examples. Adversarial
training (AT) is an effective defense against adversarial examples. However, AT
is prone to overfitting which degrades robustness substantially. Recently, data
augmentation (DA) was shown to be effective in mitigating robust overfitting if
appropriately designed and optimized for AT. This work proposes a new method to
automatically learn online, instance-wise, DA policies to improve robust
generalization for AT. A novel policy learning objective, consisting of
Vulnerability, Affinity and Diversity, is proposed and shown to be sufficiently
effective and efficient to be practical for automatic DA generation during AT.
This allows our method to efficiently explore a large search space for a more
effective DA policy and evolve the policy as training progresses. Empirically,
our method is shown to outperform or match all competitive DA methods across
various model architectures (CNNs and ViTs) and datasets (CIFAR10, SVHN and
Imagenette). Our DA policy enables vanilla AT to surpass several
state-of-the-art AT methods (which use baseline DA) in terms of both accuracy
and robustness. It can also be combined with those advanced AT methods to
further boost robustness.
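To make the approach concrete, the following is a minimal, illustrative PyTorch sketch of online, instance-wise DA policy learning in the spirit of the abstract: a policy network samples one augmentation per training instance and is updated, REINFORCE-style, with a reward that prefers augmentations that are hard for the robust model (a stand-in for Vulnerability) yet still recognisable to a fixed standardly-trained model (a stand-in for Affinity), plus an entropy bonus (a stand-in for Diversity). The names `policy_net`, `ops`, `attack` and `frozen_std_model`, and the exact reward forms, are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions: PyTorch; `policy_net` maps images to logits over K
# augmentation ops; `ops` is a list of K callables; `attack` crafts adversarial
# examples, e.g. PGD). The reward terms below are simplified stand-ins for the
# paper's Vulnerability, Affinity and Diversity objectives.
import torch
import torch.nn.functional as F

def sample_instancewise_aug(policy_net, x, ops):
    """Sample one augmentation op per training instance from the policy."""
    logits = policy_net(x)                               # (B, K) per-instance logits
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()                                  # (B,) chosen op per instance
    x_aug = torch.stack([ops[i](xi) for i, xi in zip(idx.tolist(), x)])
    return x_aug, dist.log_prob(idx), dist.entropy()

def policy_reward(model, frozen_std_model, x_aug, y, attack):
    """Per-instance reward: augmented data should be hard for the robust model
    (Vulnerability-like) yet still recognisable to a fixed standardly-trained
    model (Affinity-like)."""
    x_adv = attack(model, x_aug, y)
    vul = (F.cross_entropy(model(x_adv), y, reduction="none")
           - F.cross_entropy(model(x_aug), y, reduction="none"))
    aff = -F.cross_entropy(frozen_std_model(x_aug), y, reduction="none")
    return vul + aff

def update_policy(policy_opt, log_prob, entropy, reward, div_weight=0.01):
    """REINFORCE-style update; the entropy bonus plays the role of Diversity."""
    loss = -(log_prob * reward.detach()).mean() - div_weight * entropy.mean()
    policy_opt.zero_grad()
    loss.backward()
    policy_opt.step()
```

In a full AT loop, the target model would then be trained on adversarial examples crafted from the sampled augmentations, and the policy would be re-optimized periodically so it evolves as training progresses.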
Related papers
- Adaptive Batch Normalization Networks for Adversarial Robustness [33.14617293166724]
Adversarial Training (AT) has been a standard foundation of modern adversarial defense approaches.
We propose an adaptive Batch Normalization Network (ABNN), inspired by the recent advances in test-time domain adaptation.
ABNN consistently improves adversarial robustness against both digital and physically realizable attacks.
arXiv Detail & Related papers (2024-05-20T00:58:53Z)
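The ABNN summary above does not spell out the mechanism; for background, a common test-time BN-adaptation idea in this area is to re-estimate, or blend in, normalization statistics from the incoming test batch instead of relying only on training-time running statistics. The sketch below illustrates that generic idea, not ABNN itself; the blending weight `alpha` is an assumption.

```python
# Generic test-time BN adaptation sketch (not ABNN's exact method): blend the
# stored training-time running statistics with statistics computed on a test
# batch. `alpha` (assumed) controls how much the test batch contributes.
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_statistics(model: nn.Module, test_batch: torch.Tensor, alpha: float = 0.5):
    """Blend source (training-time) BN statistics with test-batch statistics."""
    bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    saved = [(m.running_mean.clone(), m.running_var.clone(), m.momentum) for m in bn_layers]
    was_training = model.training
    for m in bn_layers:
        m.momentum = 1.0      # running stats become exactly the current batch stats
        m.train()
    model(test_batch)         # forward pass overwrites running_mean/var with batch stats
    for m, (mu, var, mom) in zip(bn_layers, saved):
        m.running_mean.copy_(alpha * m.running_mean + (1 - alpha) * mu)
        m.running_var.copy_(alpha * m.running_var + (1 - alpha) * var)
        m.momentum = mom
        m.eval()
    if was_training:
        model.train()
```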
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- A Unified Wasserstein Distributional Robustness Framework for Adversarial Training [24.411703133156394]
This paper presents a unified framework that connects Wasserstein distributional robustness with current state-of-the-art AT methods.
We introduce a new Wasserstein cost function and a new series of risk functions, with which we show that standard AT methods are special cases of their counterparts in our framework.
This connection leads to an intuitive relaxation and generalization of existing AT methods and facilitates the development of a new family of distributional robustness AT-based algorithms.
arXiv Detail & Related papers (2022-02-27T19:40:29Z)
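For context on the entry above, the generic Wasserstein distributionally robust training objective, and the widely used Lagrangian relaxation that reduces to PGD-style adversarial training for an L_p ground cost, can be written as follows. This is only the background form; the paper's new cost function and risk functions are refinements of it and are not reproduced here.

```latex
% Generic Wasserstein DRO objective (background form):
\min_{\theta}\; \sup_{\mathbb{Q}:\, W_{c}(\mathbb{Q},\mathbb{P})\le \epsilon}\;
  \mathbb{E}_{(x,y)\sim \mathbb{Q}}\big[\ell(f_{\theta}(x), y)\big]
% Lagrangian relaxation (recovers PGD-style AT when c is an \ell_p ground cost
% restricted to a small perturbation ball):
\min_{\theta}\; \mathbb{E}_{(x,y)\sim \mathbb{P}}
  \Big[\max_{x'}\; \ell(f_{\theta}(x'), y) - \gamma\, c\big((x',y),(x,y)\big)\Big]
```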
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance the unlabeled data robustness via AT while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA), which is trained to automatically generate and align features of arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Consistency Regularization for Adversarial Robustness [88.65786118562005]
Adversarial training is one of the most successful methods to obtain the adversarial robustness of deep neural networks.
However, a significant generalization gap in the robustness obtained from AT has been problematic.
In this paper, we investigate data augmentation techniques to address the issue.
arXiv Detail & Related papers (2021-03-08T09:21:41Z)
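As background for the technique named in the title above, a generic consistency-regularization term for adversarial training penalizes disagreement between the model's predictions on adversarial examples crafted from two different augmentations of the same image. The sketch below shows one common form (a Jensen-Shannon-style term); the paper's exact loss, temperature and weighting may differ, and `attack` is an assumed PGD-like routine.

```python
# Generic consistency-regularization term for AT (illustrative only).
# x_aug1 / x_aug2 are two random augmentations of the same clean batch.
import torch
import torch.nn.functional as F

def consistency_loss(model, x_aug1, x_aug2, y, attack, temperature=1.0):
    adv1 = attack(model, x_aug1, y)
    adv2 = attack(model, x_aug2, y)
    p1 = F.softmax(model(adv1) / temperature, dim=1)
    p2 = F.softmax(model(adv2) / temperature, dim=1)
    m = 0.5 * (p1 + p2)
    # Jensen-Shannon-style divergence between the two adversarial predictions.
    js = 0.5 * (F.kl_div(m.log(), p1, reduction="batchmean")
                + F.kl_div(m.log(), p2, reduction="batchmean"))
    return js
```

Such a term is typically added, with a weighting coefficient, to the usual adversarial training loss.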
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
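Hypersphere embedding, as the term is typically used in this line of work, replaces the ordinary linear classification head with one that L2-normalizes both the penultimate features and the class weight vectors, so the logits become scaled cosine similarities. The sketch below shows that mechanism in isolation; the scale value and any angular-margin term used in the paper are assumptions not reproduced here.

```python
# Minimal hypersphere-embedding head (illustrative): logits are scaled cosine
# similarities between L2-normalized features and L2-normalized class weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.scale = scale          # assumed value; tuned in practice

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        f = F.normalize(features, dim=1)        # feature normalization
        w = F.normalize(self.weight, dim=1)     # weight normalization
        return self.scale * f @ w.t()           # (B, num_classes) cosine logits
```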
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
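The distributional idea in the ADT entry above can be written generically as follows: rather than a single worst-case perturbation per example, the inner maximization is over a per-example distribution of perturbations, usually with an entropy term to keep that distribution from collapsing onto one point. The parameterization of the distribution and the weight lambda are details of the paper not reproduced here.

```latex
% Generic adversarial distributional training objective (illustrative form):
% the inner maximization is over a per-example perturbation distribution
% p(\delta \mid x) supported on the allowed perturbation set S, with an
% entropy bonus H(\cdot).
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\max_{p(\delta\mid x)\,\in\,\mathcal{P}(S)}\;
    \mathbb{E}_{\delta\sim p(\delta\mid x)}\big[\ell(f_{\theta}(x+\delta),y)\big]
    \;+\;\lambda\, H\big(p(\delta\mid x)\big)\Big]
```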
This list is automatically generated from the titles and abstracts of the papers in this site.