Continual Adversarial Defense
- URL: http://arxiv.org/abs/2312.09481v2
- Date: Wed, 13 Mar 2024 15:24:19 GMT
- Title: Continual Adversarial Defense
- Authors: Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li,
Jiazhong Chen, Alan Yuille, Ning Yu
- Abstract summary: We propose the first continual adversarial defense framework that adapts to any attacks in a dynamic scenario.
In practice, CAD is modeled under four principles: (1) continual adaptation to new attacks without catastrophic forgetting, (2) few-shot adaptation, (3) memory-efficient adaptation, and (4) high accuracy on both clean and adversarial images.
- Score: 38.77563936937233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In response to adversarial attacks against visual classifiers,
which evolve on a monthly basis, numerous defenses have been proposed to
generalize against as many known attacks as possible. However, designing a
defense method that generalizes to all types of attacks is not realistic
because the environment in which defense systems operate is dynamic and
comprises various unique attacks that emerge as time goes on. The defense
system must gather online few-shot defense feedback to promptly enhance itself,
leveraging efficient memory utilization. Therefore, we propose the first
continual adversarial defense (CAD) framework that adapts to any attacks in a
dynamic scenario, where various attacks emerge stage by stage. In practice, CAD
is modeled under four principles: (1) continual adaptation to new attacks
without catastrophic forgetting, (2) few-shot adaptation, (3) memory-efficient
adaptation, and (4) high accuracy on both clean and adversarial images. We
explore and integrate cutting-edge continual learning, few-shot learning, and
ensemble learning techniques to satisfy these principles. Experiments conducted
on CIFAR-10 and ImageNet-100 validate the effectiveness of our approach against
multiple stages of modern adversarial attacks and demonstrate significant
improvements over numerous baseline methods. In particular, CAD is capable of
quickly adapting with minimal feedback and a low cost of defense failure, while
maintaining good performance against previous attacks. Our research sheds light
on a brand-new paradigm for continual defense adaptation against dynamic and
evolving attacks.
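The four principles above describe a stage-wise adaptation loop: collect a few-shot batch of defense feedback per attack stage, fit a stage-specific ensemble member, and replay a bounded exemplar memory to avoid catastrophic forgetting. A minimal sketch of that loop, with all class and method names being illustrative assumptions rather than the paper's actual API:

```python
from collections import deque

class ContinualDefense:
    """Illustrative sketch of a stage-wise continual defense loop.
    At each stage, a small few-shot batch of defense feedback is used to
    fit a new stage-specific detector, while a bounded memory of exemplars
    from earlier stages guards against catastrophic forgetting."""

    def __init__(self, memory_size=64):
        self.memory = deque(maxlen=memory_size)  # memory-efficient exemplar buffer
        self.detectors = []                      # ensemble: one member per attack stage

    def adapt(self, few_shot_batch, fit_detector):
        # Few-shot adaptation: train on the new feedback plus replayed exemplars.
        replay = list(self.memory)
        self.detectors.append(fit_detector(few_shot_batch + replay))
        self.memory.extend(few_shot_batch)       # retain a few exemplars per stage

    def predict(self, x):
        # Ensemble over all stage detectors; majority vote as a simple combiner.
        votes = [d(x) for d in self.detectors]
        return max(set(votes), key=votes.count)
```

Here `fit_detector` stands in for whatever few-shot learner is used per stage; the deque's `maxlen` caps memory regardless of how many attack stages arrive.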
Related papers
- Towards Efficient Transferable Preemptive Adversarial Defense [13.252842556505174]
Deep learning technology has become untrustworthy because of its sensitivity to perturbations.
We devise a strategy of "attacking" the input before it is attacked.
Running only three steps, our Fast Preemption framework outperforms benchmark training-time, test-time, and preemptive adversarial defenses.
arXiv Detail & Related papers (2024-07-22T10:23:44Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches [37.317604316147985]
The vulnerability of deep neural networks to adversarial patches has motivated numerous defense strategies for boosting model robustness.
We develop Embodied Active Defense (EAD), a proactive defensive strategy that actively contextualizes environmental information to address misaligned adversarial patches in 3D real-world settings.
arXiv Detail & Related papers (2024-03-31T03:02:35Z)
- Versatile Defense Against Adversarial Attacks on Image Recognition [2.9980620769521513]
Defending against adversarial attacks in a real-life setting can be compared to the way antivirus software works.
It appears that a defense method based on image-to-image translation may be capable of this.
The trained model has successfully improved the classification accuracy from nearly zero to an average of 86%.
arXiv Detail & Related papers (2024-03-13T01:48:01Z)
- Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties [5.78419291062552]
We propose a data-driven deep reinforcement learning framework to learn proactive, context-aware defense countermeasures.
A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries.
arXiv Detail & Related papers (2023-02-03T08:33:33Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss, that finds more suitable gradient-directions, increases attack efficacy and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
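The attacks extended in this last entry build on the standard L-infinity PGD attack: repeatedly step in the sign of the loss gradient, then project back into the epsilon-ball around the clean input. A minimal NumPy sketch of plain PGD (fixed step size, without the paper's adaptive extensions), with the toy logistic-regression target being an illustrative assumption:

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon, alpha, steps):
    """Basic L-infinity PGD: ascend the loss gradient with fixed step
    size alpha, projecting into the epsilon-ball after every step."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)                   # gradient-sign ascent
        x_adv = np.clip(x_adv, x0 - epsilon, x0 + epsilon)   # L-inf projection
    return x_adv

# Toy target: logistic regression on a single example with label y = 1.
w = np.array([1.0, -2.0])
b = 0.0

def loss_grad(x):
    # Gradient of -log sigmoid(w.x + b) with respect to x, for label y = 1.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - 1.0) * w

x = np.array([2.0, 0.5])
x_adv = pgd_attack(loss_grad, x, epsilon=0.5, alpha=0.1, steps=20)
```

The perturbed `x_adv` stays within the epsilon-ball while pushing the model's logit toward misclassification; the referenced paper's contribution is making the step size and objective choices automatic rather than hand-tuned as here.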
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.