Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
- URL: http://arxiv.org/abs/2202.13711v1
- Date: Mon, 28 Feb 2022 12:11:40 GMT
- Title: Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
- Authors: Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias
Hein, Taylan Cemgil
- Abstract summary: We categorize such adaptive test-time defenses and explain their potential benefits and drawbacks.
Unfortunately, none significantly improve upon static models when evaluated appropriately.
Some even weaken the underlying static model while simultaneously increasing inference cost.
- Score: 60.55448652445904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive defenses that use test-time optimization promise to improve
robustness to adversarial examples. We categorize such adaptive test-time
defenses and explain their potential benefits and drawbacks. In the process, we
evaluate some of the latest proposed adaptive defenses (most of them published
at peer-reviewed conferences). Unfortunately, none significantly improve upon
static models when evaluated appropriately. Some even weaken the underlying
static model while simultaneously increasing inference cost. While these
results are disappointing, we still believe that adaptive test-time defenses
are a promising avenue of research and, as such, we provide recommendations on
evaluating such defenses. We go beyond the checklist provided by Carlini et al.
(2019) by providing concrete steps that are specific to this type of defense.
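To make the setting concrete, the sketch below illustrates one common flavor of adaptive test-time defense: "purifying" each input with a few steps of test-time optimization on an auxiliary objective before classifying it. This is a minimal illustration under assumed placeholders (`model`, `aux_loss`), not the method of any specific paper evaluated here.

```python
# Minimal sketch of an adaptive test-time defense (illustrative only; not the
# method of any particular paper). Each input is "purified" by a few gradient
# steps on an auxiliary objective before the static classifier is applied.
import torch

def purify(x, aux_loss, steps=5, lr=0.1):
    """Run a few steps of test-time optimization on the input itself."""
    x_p = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x_p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        aux_loss(x_p).backward()  # e.g., a reconstruction or consistency loss
        opt.step()
    return x_p.detach()

def defended_forward(model, x, aux_loss):
    # The defense is purification followed by the unchanged static model.
    return model(purify(x, aux_loss))
```

The paper's central point applies directly to code like this: a faithful evaluation must attack `defended_forward` as a whole, for example by differentiating through (or approximating) the purification loop, rather than attacking only the static `model`; the latter can badly overestimate robustness.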
Related papers
- Closing the Gap: Achieving Better Accuracy-Robustness Tradeoffs against Query-Based Attacks [1.54994260281059]
We show how to efficiently establish, at test time, a solid tradeoff between robustness and accuracy when mitigating query-based attacks.
Our approach is independent of training and supported by theory.
arXiv Detail & Related papers (2023-12-15T17:02:19Z)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z)
- Increasing Confidence in Adversarial Robustness Evaluations [53.2174171468716]
We propose a test to identify weak attacks and thus weak defense evaluations.
Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample.
For eleven out of thirteen previously published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it.
arXiv Detail & Related papers (2022-06-28T13:28:13Z)
- Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack [96.50202709922698]
A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations), and reliable.
We propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method that addresses efficiency and reliability in a test-time-training fashion.
arXiv Detail & Related papers (2022-03-10T04:53:54Z)
- A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses [4.94950858749529]
We propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium.
We show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution.
arXiv Detail & Related papers (2020-09-14T15:51:15Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures caused by suboptimal step sizes and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable, and user-independent ensemble of attacks for testing adversarial robustness (a usage sketch follows this list).
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
- On Adaptive Attacks to Adversarial Example Defenses [123.32678153377915]
This paper lays out the methodology and approach necessary to perform an adaptive attack against defenses to adversarial examples.
We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against such defenses.
arXiv Detail & Related papers (2020-02-19T18:50:29Z)
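The parameter-free ensemble of attacks listed above (the 2020-03-03 entry) is distributed by its authors as the `autoattack` package; the following is a usage sketch, assuming a PyTorch classifier `model` that returns logits and an already-normalized test batch `x_test`, `y_test` (all placeholders). The calls shown follow the public repository at https://github.com/fra31/auto-attack.

```python
# Usage sketch for the parameter-free attack ensemble (AutoAttack), assuming
# the `autoattack` package from https://github.com/fra31/auto-attack.
# `model`, `x_test`, and `y_test` are placeholders for your own classifier
# and data; eps=8/255 is the usual L-inf budget for CIFAR-10.
import torch
from autoattack import AutoAttack

adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)

# Robust accuracy: fraction of adversarial inputs still classified correctly.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(1) == y_test).float().mean().item()
print(f"robust accuracy: {robust_acc:.3f}")
```

For an adaptive test-time defense, the same caveat as in the earlier sketch applies: `model` here must wrap the full defended pipeline, not just the underlying static network.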
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.