Position Paper: Beyond Robustness Against Single Attack Types
- URL: http://arxiv.org/abs/2405.01349v1
- Date: Thu, 2 May 2024 14:58:44 GMT
- Title: Position Paper: Beyond Robustness Against Single Attack Types
- Authors: Sihui Dai, Chong Xiang, Tong Wu, Prateek Mittal
- Abstract summary: Current research on defending against adversarial examples focuses primarily on achieving robustness against a single attack type.
The space of possible perturbations is much larger and currently cannot be modeled by a single attack type.
We draw attention to three potential directions involving robustness against multiple attacks: simultaneous multiattack robustness, unforeseen attack robustness, and continual adaptive robustness.
- Score: 42.09231029292568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current research on defending against adversarial examples focuses primarily on achieving robustness against a single attack type such as $\ell_2$ or $\ell_{\infty}$-bounded attacks. However, the space of possible perturbations is much larger and currently cannot be modeled by a single attack type. The discrepancy between the focus of current defenses and the space of attacks of interest calls to question the practicality of existing defenses and the reliability of their evaluation. In this position paper, we argue that the research community should look beyond single attack robustness, and we draw attention to three potential directions involving robustness against multiple attacks: simultaneous multiattack robustness, unforeseen attack robustness, and a newly defined problem setting which we call continual adaptive robustness. We provide a unified framework which rigorously defines these problem settings, synthesize existing research in these fields, and outline open directions. We hope that our position paper inspires more research in simultaneous multiattack, unforeseen attack, and continual adaptive robustness.
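As a minimal illustration of the multiattack setting described in the abstract, the sketch below (illustrative PyTorch code with hypothetical hyperparameters, not the authors' implementation) counts an input as robust only if the classifier withstands every attack in a union of $\ell_{\infty}$- and $\ell_2$-bounded PGD attacks:

```python
# Illustrative sketch: "union" robust accuracy over multiple attack types.
# Assumes a PyTorch image classifier `model` and a loader of (x, y) batches
# with pixels in [0, 1]; all hyperparameters below are hypothetical.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, steps, norm="linf"):
    """PGD attack constrained to an l_inf or l_2 ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            if norm == "linf":
                delta += alpha * grad.sign()
                delta.clamp_(-eps, eps)
            else:  # l2: normalized gradient step, then project onto the ball
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                delta += alpha * g
                norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                delta *= (eps / (norms + 1e-12)).clamp(max=1.0)
            # keep the adversarial image inside the valid pixel range
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()

def union_robust_accuracy(model, loader, attack_configs):
    """Count an input as correct only if it survives *every* attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        survives = torch.ones_like(y, dtype=torch.bool)
        for cfg in attack_configs:
            x_adv = pgd(model, x, y, **cfg)
            with torch.no_grad():
                survives &= model(x_adv).argmax(dim=1) == y
        correct += survives.sum().item()
        total += y.numel()
    return correct / total

# Hypothetical budgets for CIFAR-10-scale images.
attack_configs = [
    dict(eps=8 / 255, alpha=2 / 255, steps=20, norm="linf"),
    dict(eps=0.5, alpha=0.1, steps=20, norm="l2"),
]
```

Unforeseen and continual adaptive robustness differ in which attack types are known at training time versus evaluation time, but a union-style evaluation like the one above is a natural yardstick for simultaneous multiattack robustness.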
Related papers
- Fast Preemption: Forward-Backward Cascade Learning for Efficient and Transferable Preemptive Adversarial Defense [13.252842556505174]
Fast Preemption is a novel preemptive adversarial defense that overcomes efficiency challenges while achieving state-of-the-art robustness and transferability.
Executing in just three iterations, Fast Preemption outperforms existing training-time, test-time, and preemptive defenses.
arXiv Detail & Related papers (2024-07-22T10:23:44Z)
- Adversarial Markov Games: On Adaptive Decision-Based Attacks and Defenses [21.759075171536388]
We show how both attacks and defenses can benefit from interacting with and learning from each other.
We demonstrate that active defenses, which control how the system responds, are a necessary complement to model hardening when facing decision-based attacks.
We lay out effective strategies for ensuring the robustness of ML-based systems deployed in the real world.
arXiv Detail & Related papers (2023-12-20T21:24:52Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Continual Adversarial Defense [37.37029638528458]
A defense system continuously collects adversarial data online to quickly improve itself.
The defense must adapt continually to new attacks without catastrophic forgetting, support few-shot and memory-efficient adaptation, and maintain high accuracy on both clean and adversarial data.
In particular, CAD is capable of quickly adapting with minimal budget and a low cost of defense failure while maintaining good performance against previous attacks.
arXiv Detail & Related papers (2023-12-15T01:38:26Z)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z)
- Can Adversarial Training Be Manipulated By Non-Robust Features? [64.73107315313251]
Adversarial training, originally designed to resist test-time adversarial examples, has been shown to be promising in mitigating training-time availability attacks.
We identify a novel threat model named stability attacks, which aims to hinder robust availability by slightly perturbing the training data.
Under this threat, we find that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting.
arXiv Detail & Related papers (2022-01-31T16:25:25Z)
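To make the notion of a defense budget $\epsilon$ in the entry above concrete, here is a minimal sketch of a standard PGD adversarial training loop (illustrative PyTorch, reusing the hypothetical pgd helper from the first sketch; this is not the paper's construction, and a stability attack would perturb the training data before this loop runs):

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8 / 255):
    """One epoch of PGD adversarial training with defense budget eps."""
    model.train()
    for x, y in loader:
        # Inner maximization: worst-case perturbation inside the eps-ball,
        # using the pgd() helper defined in the earlier sketch.
        x_adv = pgd(model, x, y, eps=eps, alpha=eps / 4, steps=10, norm="linf")
        # Outer minimization: fit the model to the adversarial examples.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```

The paper's claim is that when the training data itself has been slightly perturbed, running this loop with the conventional $\epsilon$ no longer provides test robustness.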
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths; the method is trained to automatically align these features.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Automated Discovery of Adaptive Attacks on Adversarial Defenses [14.633898825111826]
We present a framework that automatically discovers an effective attack on a given model with an unknown defense.
We show it outperforms AutoAttack, the current state-of-the-art tool for reliable evaluation of adversarial defenses.
arXiv Detail & Related papers (2021-02-23T18:43:24Z)
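For reference, the AutoAttack baseline mentioned above is typically run as follows (standard usage of the autoattack package from github.com/fra31/auto-attack; `model`, `x_test`, and `y_test` are assumed to be a PyTorch classifier and test tensors):

```python
# Standard AutoAttack evaluation at an l_inf budget of 8/255.
from autoattack import AutoAttack

adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```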
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes the function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
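Purely to illustrate the idea of guiding an attack with the function mapping of the clean image, the sketch below combines a margin loss on the perturbed output with a relaxation term tying it to the clean output, weighted by a `lam` that would typically be decayed over attack iterations. This is a hypothetical PyTorch reconstruction of the flavor of such an objective, not the exact GAMA loss:

```python
import torch
import torch.nn.functional as F

def guided_attack_loss(model, x, x_adv, y, lam):
    """Margin loss on x_adv plus a clean-output-guided relaxation term."""
    p_clean = F.softmax(model(x), dim=1).detach()  # function mapping of the clean image
    p_adv = F.softmax(model(x_adv), dim=1)
    # Margin term: drive the true-class probability below the best other class.
    true_prob = p_adv.gather(1, y.unsqueeze(1)).squeeze(1)
    other_prob = p_adv.scatter(1, y.unsqueeze(1), -1.0).max(dim=1).values
    margin = other_prob - true_prob
    # Relaxation term guided by the clean output; lam is decayed toward zero
    # as the attack iterations progress.
    relax = ((p_adv - p_clean) ** 2).sum(dim=1)
    return (margin + lam * relax).mean()  # maximized w.r.t. the perturbation
```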
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.