Understanding and Improving Ensemble Adversarial Defense
- URL: http://arxiv.org/abs/2310.18477v2
- Date: Thu, 2 Nov 2023 11:57:18 GMT
- Title: Understanding and Improving Ensemble Adversarial Defense
- Authors: Yian Deng, Tingting Mu
- Abstract summary: We develop a new error theory dedicated to understanding ensemble adversarial defense.
We propose an effective approach to improve ensemble adversarial defense, named interactive global adversarial training (iGAT).
iGAT boosts the performance of existing ensemble defenses by up to 17% on the CIFAR10 and CIFAR100 datasets under both white-box and black-box attacks.
- Score: 4.504026914523449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble strategies have become popular in adversarial defense, where
multiple base classifiers are trained to defend against adversarial attacks in a
cooperative manner. Despite the empirical success, it remains unclear why an
ensemble of adversarially trained classifiers is more robust than a single one. To
fill this gap, we develop a new error theory dedicated to understanding ensemble
adversarial defense, demonstrating a provable 0-1 loss reduction on challenging
sample sets in an adversarial defense scenario. Guided by this theory, we propose
an effective approach to improve ensemble adversarial defense, named interactive
global adversarial training (iGAT). The proposal includes (1) a probabilistic
distributing rule that selectively allocates to different base classifiers
adversarial examples that are globally challenging to the ensemble, and (2) a
regularization term to rescue the severest weaknesses of the base classifiers.
Tested over various existing ensemble adversarial defense techniques, iGAT boosts
their performance by up to 17% on the CIFAR10 and CIFAR100 datasets under both
white-box and black-box attacks.
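Since the abstract pins down the two iGAT ingredients, here is a minimal PyTorch sketch of how they could fit together; the confidence-proportional sampling rule, the max-loss rescue term, and the `reg_weight` default are illustrative assumptions, not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def igat_loss(models, x_adv, y, reg_weight=0.1):
    """Illustrative sketch of the two iGAT ingredients (not the official code).

    1. A probabilistic distributing rule: each globally challenging
       adversarial example is assigned to ONE base classifier, sampled
       here with probability proportional to that classifier's confidence
       in the true class (an assumed rule).
    2. A rescue regularizer: penalize the worst per-example loss among
       the base classifiers to patch the ensemble's severest weakness.
    """
    probs = torch.stack([F.softmax(m(x_adv), dim=1) for m in models])  # (K, B, C)
    ens_pred = probs.mean(dim=0).argmax(dim=1)                         # ensemble vote
    hard = ens_pred != y                                               # globally challenging set

    loss = x_adv.new_zeros(())
    if hard.any():
        # confidence of each member in the true class of each hard example
        true_conf = probs[:, hard, :].gather(
            2, y[hard].expand(len(models), -1).unsqueeze(2)).squeeze(2)  # (K, B_hard)
        # sample one member per hard example, weighted by its confidence
        assign = torch.multinomial(
            true_conf.t().detach().clamp_min(1e-8), 1).squeeze(1)
        hard_idx = hard.nonzero(as_tuple=True)[0]
        for k, m in enumerate(models):
            sel = hard_idx[assign == k]
            if len(sel):
                loss = loss + F.cross_entropy(m(x_adv[sel]), y[sel], reduction="sum")
        loss = loss / hard.sum()

    # rescue term: worst member's cross-entropy, averaged over the batch
    per_member_ce = torch.stack(
        [F.cross_entropy(m(x_adv), y, reduction="none") for m in models])  # (K, B)
    rescue = per_member_ce.max(dim=0).values.mean()
    return loss + reg_weight * rescue
```

In this reading, the distributing rule only decides which member trains on which hard example, while the rescue term keeps every member's worst-case loss in check.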
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting [5.084323778393556]
Adversarial training with untargeted attacks is one of the most recognized defense methods.
We find that the naturally imbalanced inter-class semantic similarity makes hard-class pairs become virtual targets of each other.
We propose to upweight the hard-class pair loss during model optimization, which promotes learning discriminative features from hard classes.
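As a rough illustration of the reweighting idea above, the sketch below upweights each example's cross-entropy by the model's confusion between the true class and its hardest rival; the ratio form and the exponent `gamma` are guessed stand-ins for the paper's self-paced weighting:

```python
import torch
import torch.nn.functional as F

def hard_pair_reweighted_ce(logits, y, gamma=2.0):
    """Sketch of self-paced hard-class pair reweighting (details assumed).

    Each example's loss is upweighted by how strongly the model confuses
    the true class with its hardest rival class, so semantically similar
    ("virtual target") class pairs dominate optimization.
    """
    probs = F.softmax(logits, dim=1)
    true_p = probs.gather(1, y.unsqueeze(1)).squeeze(1)
    rival_p = probs.clone()
    rival_p.scatter_(1, y.unsqueeze(1), 0.0)     # mask out the true class
    rival_p = rival_p.max(dim=1).values          # confidence in the hardest rival
    weight = (rival_p / (true_p + rival_p + 1e-8)) ** gamma  # self-paced pair weight
    ce = F.cross_entropy(logits, y, reduction="none")
    return (weight.detach() * ce).mean()
```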
arXiv Detail & Related papers (2022-10-26T22:51:36Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept of adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
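A loss in this spirit can be sketched directly on penultimate-layer features; the cosine-similarity form below is an assumption, since the actual ATFS objective is built on the adversarial training graph:

```python
import torch
import torch.nn.functional as F

def feature_separability_loss(feats, y):
    """Sketch of a separability regularizer in the spirit of ATFS (assumed form).

    Encourages high cosine similarity between features sharing a label
    (intra-class similarity) and low similarity across labels
    (inter-class variance).
    """
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()                                  # (B, B) cosine similarities
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    same.fill_diagonal_(0.0)                         # ignore self-similarity
    diff = 1.0 - same
    diff.fill_diagonal_(0.0)
    intra = (sim * same).sum() / same.sum().clamp_min(1.0)
    inter = (sim * diff).sum() / diff.sum().clamp_min(1.0)
    return inter - intra    # minimize: pull classes together, push apart
```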
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function [13.417003144007156]
Adversarial attacks tend to rely on the principle of transferability.
Ensemble methods against adversarial attacks demonstrate that an adversarial example is less likely to mislead multiple classifiers.
Recent ensemble methods have either been shown to be vulnerable to stronger adversaries or shown to lack an end-to-end evaluation.
arXiv Detail & Related papers (2021-12-09T14:26:13Z)
- Saliency Diversified Deep Ensemble for Robustness to Adversaries [1.9659095632676094]
This work proposes a novel diversity-promoting learning approach for deep ensembles.
The idea is to promote saliency map diversity (SMD) on ensemble members to prevent the attacker from targeting all ensemble members at once.
We empirically show a reduced transferability between ensemble members and improved performance compared to the state-of-the-art ensemble defense.
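A minimal sketch of an SMD-style regularizer, assuming saliency maps are taken as input gradients of each member's loss and diversity is enforced by penalizing their pairwise cosine similarity:

```python
import torch
import torch.nn.functional as F

def saliency_diversity_penalty(models, x, y):
    """Sketch of a saliency-map diversity (SMD) regularizer (assumed form).

    Computes each member's input-gradient saliency map and penalizes
    pairwise cosine similarity between them, so a single perturbation
    direction is less likely to fool every member at once.
    """
    x = x.detach().requires_grad_(True)
    maps = []
    for m in models:
        loss = F.cross_entropy(m(x), y)
        g, = torch.autograd.grad(loss, x, create_graph=True)
        maps.append(g.flatten(1))                    # (B, D) saliency per member
    penalty = x.new_zeros(())
    for i in range(len(maps)):
        for j in range(i + 1, len(maps)):
            penalty = penalty + F.cosine_similarity(maps[i], maps[j], dim=1).mean()
    return penalty / max(len(maps) * (len(maps) - 1) // 2, 1)
```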
arXiv Detail & Related papers (2021-12-07T10:18:43Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a universal adversarial perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by utilizing class-wise UAPs during adversarial training.
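One plausible reading of class-wise UAT is to keep one universal perturbation per class and alternate between sharpening it and training on it; the buffer layout, step sizes, and sign-gradient update below are assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def classwise_uat_step(model, opt, x, y, uaps, eps=8 / 255, step=0.01):
    """Sketch of one class-wise universal adversarial training step (assumed).

    `uaps` is a (num_classes, C, H, W) buffer holding one universal
    perturbation per class; each batch both refines the perturbations of
    the classes it contains and trains the model on the perturbed inputs.
    """
    # 1) ascend: make each class's UAP more damaging on this batch
    delta = uaps[y].detach().requires_grad_(True)          # (B, C, H, W)
    loss_adv = F.cross_entropy(model(x + delta), y)
    g, = torch.autograd.grad(loss_adv, delta)
    with torch.no_grad():
        for c in y.unique():
            idx = (y == c)
            uaps[c] = (uaps[c] + step * g[idx].mean(0).sign()).clamp_(-eps, eps)

    # 2) descend: train the model on the class-wise perturbed batch
    opt.zero_grad()
    loss = F.cross_entropy(model(x + uaps[y].detach()), y)
    loss.backward()
    opt.step()
    return loss.item()
```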
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
- Harnessing adversarial examples with a surprisingly simple defense [47.64219291655723]
I introduce a very simple method to defend against adversarial examples.
The basic idea is to raise the slope of the ReLU function at test time.
Experiments over MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed defense.
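The idea is simple enough to sketch in a few lines of PyTorch; the replacement module below and the default `alpha` are illustrative (the paper tunes the slope):

```python
import torch.nn as nn

class SlopedReLU(nn.Module):
    """Sketch of the test-time defense: steepen ReLU's positive slope.

    Computes y = alpha * max(0, x) with alpha > 1 at inference
    (alpha here is a guessed value, not the paper's tuned setting).
    """
    def __init__(self, alpha=5.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, x):
        return self.alpha * x.clamp(min=0)

def steepen_relus(model, alpha=5.0):
    """Recursively replace nn.ReLU modules before evaluation."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, SlopedReLU(alpha))
        else:
            steepen_relus(child, alpha)
    return model
```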
arXiv Detail & Related papers (2020-04-26T03:09:42Z)
- Defensive Few-shot Learning [77.82113573388133]
This paper investigates a new challenging problem called defensive few-shot learning.
It aims to learn a robust few-shot model against adversarial attacks.
The proposed framework can effectively make the existing few-shot models robust against adversarial attacks.
arXiv Detail & Related papers (2019-11-16T05:57:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.