Cooperation or Competition: Avoiding Player Domination for Multi-Target
Robustness via Adaptive Budgets
- URL: http://arxiv.org/abs/2306.15482v1
- Date: Tue, 27 Jun 2023 14:02:10 GMT
- Title: Cooperation or Competition: Avoiding Player Domination for Multi-Target
Robustness via Adaptive Budgets
- Authors: Yimu Wang, Dinghuai Zhang, Yihan Wu, Heng Huang, Hongyang Zhang
- Abstract summary: We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid player domination.
Experiments on standard benchmarks show that applying the proposed framework to existing approaches significantly advances multi-target robustness.
- Score: 76.20705291443208
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite incredible advances, deep learning has been shown to be susceptible
to adversarial attacks. Numerous approaches have been proposed to train robust
networks both empirically and certifiably. However, most of them defend against
only a single type of attack, while recent work takes steps forward in
defending against multiple attacks. In this paper, to understand multi-target
robustness, we view this problem as a bargaining game in which different
players (adversaries) negotiate to reach an agreement on a joint direction of
parameter updating. We identify a phenomenon named player domination in the
bargaining game and show that, when it occurs, existing max-based approaches
such as MAX and MSD do not converge. Based on our theoretical analysis, we
design a novel framework that adjusts the budgets of different adversaries to
avoid player domination. Experiments on standard benchmarks show that applying
the proposed framework to existing approaches significantly advances
multi-target robustness.
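
The abstract states only that the framework adapts per-adversary budgets. As a rough illustration of how adaptive budgets could be wired into multi-target adversarial training, here is a minimal PyTorch sketch; it is not the authors' algorithm, and the PGD attackers, the summed joint loss, the shrink/grow heuristic that deflates the currently dominating adversary's budget, and all hyper-parameter values are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): multi-target adversarial training
# with adaptive per-adversary budgets. Inputs are assumed to be 4-D image
# batches in [0, 1]; all hyper-parameters are placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps, norm="linf"):
    """Standard PGD under an L-inf or L-2 budget eps (illustrative, untuned)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        if norm == "linf":
            delta.data = (delta.data + alpha * grad.sign()).clamp(-eps, eps)
        else:  # "l2"
            g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
            d = delta.data + alpha * g
            scale = eps / d.flatten(1).norm(dim=1).clamp(min=eps).view(-1, 1, 1, 1)
            delta.data = d * scale
        delta.data = (x + delta.data).clamp(0, 1) - x   # keep images valid
    return delta.detach()

def adaptive_budget_step(model, opt, x, y, budgets, shrink=0.9, grow=1.02):
    """One training step: run every adversary, detect the dominating player
    (largest adversarial loss), shrink its budget and slightly grow the others,
    then update the model on the joint (summed) loss."""
    losses = {}
    for name, cfg in budgets.items():
        delta = pgd_attack(model, x, y, cfg["eps"], cfg["alpha"],
                           cfg["steps"], cfg["norm"])
        losses[name] = F.cross_entropy(model(x + delta), y)

    dominant = max(losses, key=lambda k: losses[k].item())
    for name, cfg in budgets.items():
        factor = shrink if name == dominant else grow
        cfg["eps"] *= factor
        cfg["alpha"] *= factor

    opt.zero_grad()
    sum(losses.values()).backward()   # joint direction agreed on by all players
    opt.step()
    return dominant, {k: v.item() for k, v in losses.items()}

# Example budgets for two adversaries (values are arbitrary placeholders):
# budgets = {
#     "linf": {"eps": 8 / 255, "alpha": 2 / 255, "steps": 10, "norm": "linf"},
#     "l2":   {"eps": 0.5,     "alpha": 0.1,     "steps": 10, "norm": "l2"},
# }
```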
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Adversarial Attacks on Cooperative Multi-agent Bandits [41.79235070291252]
We study adversarial attacks on cooperative multi-agent multi-armed bandits (CMA2B) in both homogeneous and heterogeneous settings.
In the homogeneous setting, we propose attack strategies that convince all agents to select a particular target arm $T-o(T)$ times while incurring $o(T)$ attack costs in $T$ rounds.
In the heterogeneous setting, we prove that a target arm attack requires linear attack costs and propose attack strategies that can force a maximum number of agents to suffer linear regrets while incurring sublinear costs and manipulating only the observations of a few target agents.
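
For intuition, the sketch below shows the generic reward-poisoning idea that target-arm attacks on bandit learners typically build on: never touch the target arm, and depress each non-target arm's observed rewards just enough that its empirical mean stays below the target's. This is not this paper's strategy; the class name, the fixed margin, and the cost accounting are assumptions, and the stated $o(T)$ cost guarantees rely on confidence-radius-based margins rather than the constant used here.

```python
# Illustrative sketch only, not this paper's attack: generic reward poisoning
# that keeps every non-target arm's empirical mean below the target arm's.
from collections import defaultdict

class RewardPoisoner:
    def __init__(self, target_arm, margin=0.1):
        self.target = target_arm
        self.margin = margin
        self.sums = defaultdict(float)   # per-arm sum of post-attack rewards
        self.counts = defaultdict(int)   # per-arm pull counts

    def _mean(self, arm):
        return self.sums[arm] / max(self.counts[arm], 1)

    def corrupt(self, arm, reward):
        """Return (observed_reward, attack_cost) for one pull."""
        if arm == self.target:
            observed = reward                    # never attack the target arm
        else:
            # Keep this arm's post-attack mean <= target mean - margin.
            cap = self._mean(self.target) - self.margin
            max_allowed = cap * (self.counts[arm] + 1) - self.sums[arm]
            observed = min(reward, max_allowed)
        self.sums[arm] += observed
        self.counts[arm] += 1
        return observed, reward - observed       # cost = size of the distortion
```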
arXiv Detail & Related papers (2023-11-03T04:03:19Z)
- Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks [47.21491911505409]
Adversarial training serves as one of the most popular and effective methods to defend against adversarial perturbations.
We propose a novel multi-perturbation adversarial training framework, parameter-saving adversarial training (PSAT), to reinforce multi-perturbation robustness.
arXiv Detail & Related papers (2023-09-28T07:16:02Z)
- Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers [23.15190337027283]
We propose Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers (ROMANCE).
ROMANCE enables the trained policy to encounter diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations.
The quality objective is to minimize the ego-system's coordination effect, while a novel diversity regularizer is applied to diversify the attackers' behaviors.
arXiv Detail & Related papers (2023-05-10T05:29:47Z)
- Dynamic Stochastic Ensemble with Adversarial Robust Lottery Ticket Subnetworks [4.665836414515929]
Adversarial attacks are considered a major vulnerability of CNNs.
The Dynamic Defense Framework (DDF) recently changed the passive safety status quo based on the ensemble model.
We propose a method to realize the dynamic ensemble defense strategy.
arXiv Detail & Related papers (2022-10-06T00:33:19Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
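
As a crude sketch of the generator-vs-classifier min-max game this summary describes (not the paper's objective or architectures), the following toy loop alternates between a perturbation generator that maximizes a classifier's loss and a classifier that minimizes it; the MLPs, the L-inf budget, the tanh parameterization, and the optimizers are illustrative assumptions.

```python
# Toy alternating min-max game between a perturbation generator g and a
# classifier f; all architectures and hyper-parameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

eps = 0.3  # L-inf perturbation budget (MNIST-scale assumption)
f = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
g = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def game_step(x, y):
    """One alternating update: generator maximizes, classifier minimizes."""
    # Generator step: maximize the classifier's loss on perturbed inputs.
    delta = eps * g(x).view_as(x)                  # bounded perturbation
    loss_g = -F.cross_entropy(f((x + delta).clamp(0, 1)), y)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Classifier step: minimize loss on freshly generated adversarial inputs.
    with torch.no_grad():
        delta = eps * g(x).view_as(x)
    loss_f = F.cross_entropy(f((x + delta).clamp(0, 1)), y)
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()
    return loss_f.item()

# Example call with dummy MNIST-shaped data:
# x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
# game_step(x, y)
```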
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
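
This ensemble is distributed as the AutoAttack package; assuming the interface documented in its repository (https://github.com/fra31/auto-attack), a robustness evaluation looks roughly like the sketch below. The model, data, eps, and batch size are placeholders rather than recommended settings.

```python
# Rough usage sketch, assuming the AutoAttack package's documented interface;
# the model and data below are dummies purely to keep the snippet self-contained.
import torch
import torch.nn as nn
from autoattack import AutoAttack  # requires the AutoAttack package (fra31/auto-attack)

# Placeholder classifier and CIFAR-10-shaped dummy data; substitute a trained
# model and a real test set in practice.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(64, 3, 32, 32)
y_test = torch.randint(0, 10, (64,))

adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=32)
```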
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
- AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks [81.45930614122925]
Deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.
Conventional defense methods, although shown to be promising, are largely limited by their single-source single-cost nature.
We show that the multi-source nature of AdvMS mitigates the performance plateauing issue and the multi-cost nature enables improving robustness at a flexible and adjustable combination of costs.
arXiv Detail & Related papers (2020-02-19T20:46:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.