AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
- URL: http://arxiv.org/abs/2002.08439v1
- Date: Wed, 19 Feb 2020 20:46:54 GMT
- Title: AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
- Authors: Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin
- Abstract summary: Deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.
Conventional defense methods, although shown to be promising, are largely limited by their single-source single-cost nature.
We show that the multi-source nature of AdvMS mitigates the performance plateauing issue and the multi-cost nature enables improving robustness at a flexible and adjustable combination of costs.
- Score: 81.45930614122925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing effective defenses against adversarial attacks is a crucial topic as
deep neural networks have proliferated rapidly in many security-critical
domains such as malware detection and self-driving cars. Conventional defense
methods, although shown to be promising, are largely limited by their
single-source single-cost nature: The robustness promotion tends to plateau
when the defenses are made increasingly stronger while the cost tends to
amplify. In this paper, we study principles of designing multi-source and
multi-cost schemes where defense performance is boosted from multiple defending
components. Based on this motivation, we propose a multi-source and multi-cost
defense scheme, Adversarially Trained Model Switching (AdvMS), that inherits
advantages from two leading schemes: adversarial training and random model
switching. We show that the multi-source nature of AdvMS mitigates the
performance plateauing issue and the multi-cost nature enables improving
robustness at a flexible and adjustable combination of costs over different
factors which can better suit specific restrictions and needs in practice.
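The second ingredient of AdvMS, random model switching, can be illustrated with a minimal sketch. The pool of sub-models, their thresholds, and the `RandomModelSwitcher` class below are all hypothetical stand-ins, not the paper's implementation; in AdvMS each sub-model would additionally be adversarially trained.

```python
import random

class RandomModelSwitcher:
    """Minimal sketch of random model switching: each query is answered by a
    randomly chosen member of a pool of sub-models, so an attacker cannot
    target one fixed set of weights. Hypothetical illustration only."""

    def __init__(self, models, seed=None):
        self.models = list(models)
        self.rng = random.Random(seed)

    def predict(self, x):
        model = self.rng.choice(self.models)  # fresh random pick per query
        return model(x)

# Toy stand-ins for (hypothetically adversarially trained) sub-models:
# each is a 1-D classifier with a slightly different decision threshold.
pool = [lambda x, t=t: int(x > t) for t in (0.4, 0.5, 0.6)]
switcher = RandomModelSwitcher(pool, seed=0)
preds = [switcher.predict(0.55) for _ in range(5)]
```

Near the decision boundary the prediction varies with the randomly chosen sub-model, which is what makes a single fixed adversarial perturbation less reliable.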
Related papers
- Position Paper: Beyond Robustness Against Single Attack Types [42.09231029292568]
Current research on defending against adversarial examples focuses primarily on achieving robustness against a single attack type.
The space of possible perturbations is much larger and currently cannot be modeled by a single attack type.
We draw attention to three potential directions involving robustness against multiple attacks: simultaneous multiattack robustness, unforeseen attack robustness, and continual adaptive robustness.
arXiv Detail & Related papers (2024-05-02T14:58:44Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID)
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-Agent Diagnostics for Robustness via Illuminated Diversity [37.38316542660311]
We present Multi-Agent Diagnostics for Robustness via Illuminated Diversity (MADRID)
MADRID generates diverse adversarial scenarios that expose strategic vulnerabilities in pre-trained multi-agent policies.
We evaluate the effectiveness of MADRID on the 11vs11 version of Google Research Football.
arXiv Detail & Related papers (2024-01-24T14:02:09Z)
- Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks [47.21491911505409]
Adversarial training serves as one of the most popular and effective methods to defend against adversarial perturbations.
We propose a novel multi-perturbation adversarial training framework, parameter-saving adversarial training (PSAT), to reinforce multi-perturbation robustness.
arXiv Detail & Related papers (2023-09-28T07:16:02Z)
- Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that employing the proposed framework to the existing approaches significantly advances multi-target robustness.
arXiv Detail & Related papers (2023-06-27T14:02:10Z)
- Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers [23.15190337027283]
We propose Robust Multi-Agent Coordination via Generation of Auxiliary Adversarial Attackers (ROMANCE)
ROMANCE enables the trained policy to encounter diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations.
The quality objective is to minimize the ego-system's coordination performance, and a novel diversity regularizer is applied to diversify the behaviors among attackers.
arXiv Detail & Related papers (2023-05-10T05:29:47Z)
- "What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models [71.91835408379602]
Adversarial examples have long been considered a real threat to machine learning models.
We propose an alternative deployment-based defense paradigm that goes beyond the traditional white-box and black-box threat models.
arXiv Detail & Related papers (2021-02-09T20:07:13Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
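The PGD attack that the ensemble above builds on can be sketched in a toy setting. The linear classifier, margin loss, and all parameter values below are illustrative assumptions, not the paper's APGD (which, among other things, adapts the step size); the core loop is the standard signed-gradient ascent with projection onto an L-infinity ball.

```python
import numpy as np

def pgd_linear(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Basic PGD sketch against a toy linear classifier f(x) = w.x + b.
    Maximizes the margin loss -y*(w.x + b) for the true label y in {-1, +1}
    while keeping x_adv within an L-inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = -y * w                             # d/dx of -y*(w.x + b)
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
    return x_adv

# Hypothetical example: a correctly classified point gets pushed
# across the decision boundary within the allowed perturbation budget.
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.2, 0.1]), 1   # f(x) = 0.1 > 0, correct for y = +1
x_adv = pgd_linear(x, y, w, b)
```

Because the toy loss is linear, the fixed step size already drives the iterate to the boundary of the eps-ball; the step-size schedule proposed in the paper matters precisely when the loss surface is not this simple.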
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.