Randomization matters. How to defend against strong adversarial attacks
- URL: http://arxiv.org/abs/2002.11565v5
- Date: Wed, 6 Jan 2021 12:53:03 GMT
- Title: Randomization matters. How to defend against strong adversarial attacks
- Authors: Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif
- Abstract summary: We show that adversarial attacks and defenses form an infinite zero-sum game where classical results do not apply.
We show that our defense method considerably outperforms Adversarial Training against state-of-the-art attacks.
- Score: 17.438104235331085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Is there a classifier that ensures optimal robustness against all adversarial
attacks? This paper answers this question by adopting a game-theoretic point of
view. We show that adversarial attacks and defenses form an infinite zero-sum
game where classical results (e.g., Sion's theorem) do not apply. We demonstrate
the non-existence of a Nash equilibrium in our game when the classifier and the
adversary are both deterministic, hence giving a negative answer to the above
question in the deterministic regime. Nonetheless, the question remains open in
the randomized regime. We tackle this problem by showing that, under mild
conditions on the dataset distribution, any deterministic classifier can be
outperformed by a randomized one. This gives arguments for using randomization,
and leads us to a new algorithm for building randomized classifiers that are
robust to strong adversarial attacks. Empirical results validate our
theoretical analysis, and show that our defense method considerably outperforms
Adversarial Training against state-of-the-art attacks.
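As a rough illustration of the randomization argument (a minimal sketch, not the authors' algorithm), the snippet below builds a randomized classifier as a mixture q over deterministic hypotheses, sampled independently at every query; an adversary that must commit to one perturbation can then only optimize the expected loss over q. All names and the toy thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomizedClassifier:
    """Mixture q over deterministic base classifiers, sampled per query."""

    def __init__(self, base_classifiers, q):
        self.base = list(base_classifiers)   # deterministic hypotheses h_1..h_m
        self.q = np.asarray(q, dtype=float)  # mixing weights, must sum to 1

    def predict(self, x):
        # A fresh draw for every query: the adversary cannot know in
        # advance which hypothesis will answer.
        h = self.base[rng.choice(len(self.base), p=self.q)]
        return h(x)

# Toy 1-D example: each threshold can be fooled by a small perturbation,
# but no single perturbation fools both thresholds at once, so mixing
# halves the adversary's worst-case success rate on such points.
h1 = lambda x: int(x > 0.0)
h2 = lambda x: int(x > 0.5)
clf = RandomizedClassifier([h1, h2], q=[0.5, 0.5])
print([clf.predict(0.25) for _ in range(5)])  # mixes the two labels
```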
Related papers
- Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We leverage an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z)
- On the Role of Randomization in Adversarially Robust Classification [13.39932522722395]
We show that a randomized ensemble can outperform the hypothesis set it is built from in terms of adversarial risk.
We also give an explicit description of a deterministic hypothesis set containing a deterministic classifier that matches this performance.
arXiv Detail & Related papers (2023-02-14T17:51:00Z)
- Randomized Smoothing under Attack: How Good is it in Pratice? [17.323638042215013]
We first highlight the mismatch between a theoretical certification and the practice of attacks on classifiers.
We then perform attacks on randomized smoothing as a defense.
Our main observation is a major mismatch between the randomized smoothing settings that yield high certified robustness and those used when defeating black-box attacks.
arXiv Detail & Related papers (2022-04-28T11:37:40Z)
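For readers unfamiliar with the defense attacked above, a minimal sketch of randomized smoothing at prediction time: classify many Gaussian-noised copies of the input and return the majority vote. `base_classifier`, `sigma`, and `n_samples` are illustrative placeholders, and the certification step is omitted.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of base_classifier over Gaussian perturbations of x."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    votes = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]  # most frequent class under noise

# Example with a toy 1-D base classifier:
print(smoothed_predict(lambda z: int(z > 0), np.array(0.1)))
```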
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
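A minimal sketch of the kind of context-consistency check mentioned above: labels co-detected in a scene are validated against a whitelist of plausible pairings. The whitelist and labels are toy assumptions, not the paper's learned context model.

```python
from itertools import combinations

# Toy whitelist of label pairs that may plausibly co-occur in one scene.
PLAUSIBLE = {frozenset(p) for p in [("car", "road"), ("car", "stop sign"),
                                    ("boat", "water")]}

def context_consistent(labels):
    # Every pair of co-detected labels must be a known plausible pairing.
    return all(frozenset(p) in PLAUSIBLE for p in combinations(set(labels), 2))

print(context_consistent(["car", "road"]))   # True
print(context_consistent(["boat", "road"]))  # False -> suspicious scene
```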
- Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks [81.13338949407205]
Recent works show that optimal bandit algorithms are vulnerable to adversarial attacks and can fail completely under attack.
Existing robust bandit algorithms only work for the non-contextual setting under attacks on rewards.
We provide the first robust bandit algorithm for the linear contextual bandit setting under a fully adaptive and omniscient attack.
arXiv Detail & Related papers (2021-06-05T22:20:34Z)
- Mixed Nash Equilibria in the Adversarial Examples Game [18.181826693937776]
This paper tackles the problem of adversarial examples from a game theoretic point of view.
We study the open question of the existence of mixed Nash equilibria in the zero-sum game formed by the attacker and the classifier.
arXiv Detail & Related papers (2021-02-13T11:47:20Z)
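As background for the existence question above, a minimal sketch of computing a mixed Nash equilibrium of a *finite* zero-sum game via linear programming. The attacker/classifier game in these papers is infinite, so the 2x2 payoff matrix here is only a toy assumption.

```python
import numpy as np
from scipy.optimize import linprog

def row_mixed_nash(A):
    """Optimal mixed strategy and value for the row player maximizing x^T A y."""
    shift = A.min() - 1.0              # make all payoffs strictly positive
    B = A - shift
    m, n = B.shape
    # Standard LP trick: min 1^T u  s.t.  B^T u >= 1, u >= 0,
    # then x = u / sum(u) and value(B) = 1 / sum(u).
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)
    u = res.x
    return u / u.sum(), 1.0 / u.sum() + shift

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # matching-pennies-style toy game
x, v = row_mixed_nash(A)
print(x, v)                            # ~[0.5 0.5], value 0.5
```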
- A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses [4.94950858749529]
We propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium.
We show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution.
arXiv Detail & Related papers (2020-09-14T15:51:15Z)
- Robustness Guarantees for Mode Estimation with an Application to Bandits [131.21717367564963]
We introduce a theory for multi-armed bandits where the values are the modes of the reward distributions instead of the means.
We show in simulations that our algorithms are robust to perturbation of the arms by adversarial noise sequences.
arXiv Detail & Related papers (2020-03-05T21:29:27Z)
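A minimal sketch of estimating the mode (rather than the mean) of a reward distribution from samples, via a histogram; the binning scheme is a simplifying assumption, not the paper's estimator.

```python
import numpy as np

def estimate_mode(samples, bins=50):
    counts, edges = np.histogram(samples, bins=bins)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])   # center of the fullest bin

rng = np.random.default_rng(0)
# 90% of mass near 2.0, plus a heavy-tailed contaminating component.
x = np.concatenate([rng.normal(2.0, 0.3, 900), rng.normal(8.0, 3.0, 100)])
print(estimate_mode(x))   # close to 2.0 despite the contamination
```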
- Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack [41.060507338755784]
This paper investigates the attack model where an adversary attacks with a certain probability at each round, and its attack value can be arbitrary and unbounded if it attacks.
We propose a novel sample-median-based and exploration-aided UCB algorithm (called med-E-UCB) and a median-based $\epsilon$-greedy algorithm (called med-$\epsilon$-greedy).
Both algorithms are provably robust to the aforementioned attack model; specifically, we show that both achieve $\mathcal{O}(\log T)$ pseudo-regret.
arXiv Detail & Related papers (2020-02-17T19:21:08Z)
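A minimal sketch in the spirit of med-E-UCB: replace each arm's empirical mean with the empirical median inside a UCB index, which is insensitive to occasional unbounded attack values. The confidence radius and the omission of the exploration aid are simplifying assumptions, not the paper's exact algorithm.

```python
import math, random

def med_ucb(arms, T, seed=0):
    random.seed(seed)
    samples = [[] for _ in arms]                 # raw rewards kept per arm
    for t in range(1, T + 1):
        if t <= len(arms):
            i = t - 1                            # pull every arm once first
        else:
            def index(i):
                s = sorted(samples[i])
                median = s[len(s) // 2]          # robust to unbounded outliers
                return median + math.sqrt(2.0 * math.log(t) / len(s))
            i = max(range(len(arms)), key=index)
        samples[i].append(arms[i]())             # observe (possibly attacked) reward
    return [len(s) for s in samples]             # pull counts per arm

arms = [lambda: random.gauss(0.5, 1.0),          # suboptimal arm
        lambda: random.gauss(0.8, 1.0)]          # optimal arm
print(med_ucb(arms, 2000))                       # optimal arm pulled far more often
```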
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
- Defensive Few-shot Learning [77.82113573388133]
This paper investigates a new challenging problem called defensive few-shot learning.
It aims to learn a robust few-shot model against adversarial attacks.
The proposed framework can effectively make the existing few-shot models robust against adversarial attacks.
arXiv Detail & Related papers (2019-11-16T05:57:16Z)