AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
- URL: http://arxiv.org/abs/2002.08439v1
- Date: Wed, 19 Feb 2020 20:46:54 GMT
- Title: AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
- Authors: Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin
- Abstract summary: Deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars.
Conventional defense methods, although shown to be promising, are largely limited by their single-source, single-cost nature.
We show that the multi-source nature of AdvMS mitigates the performance-plateauing issue, and that its multi-cost nature enables improving robustness at a flexible, adjustable combination of costs.
- Score: 81.45930614122925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing effective defenses against adversarial attacks is a crucial topic, as deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars. Conventional defense methods, although shown to be promising, are largely limited by their single-source, single-cost nature: robustness gains tend to plateau as a single defense is strengthened, while its cost continues to grow. In this paper, we study principles for designing multi-source, multi-cost schemes in which defense performance is boosted by multiple defending components. Based on this motivation, we propose a multi-source, multi-cost defense scheme, Adversarially Trained Model Switching (AdvMS), that inherits advantages from two leading schemes: adversarial training and random model switching. We show that the multi-source nature of AdvMS mitigates the performance-plateauing issue, and that its multi-cost nature enables improving robustness at a flexible, adjustable combination of costs across different factors, which can better suit specific constraints and needs in practice.
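To make the scheme concrete, below is a minimal sketch of the switching component AdvMS builds on: a pool of independently adversarially trained models, one of which is chosen at random per query. All class and variable names here are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the AdvMS idea from the abstract: keep a pool of
# adversarially trained variants and serve each query from a randomly
# chosen member, so an attacker cannot target a single fixed model.
# Names are hypothetical; this is not the authors' implementation.
import random
import torch.nn as nn

class RandomModelSwitcher(nn.Module):
    """Serve predictions from a randomly chosen member of a model pool."""

    def __init__(self, models):
        super().__init__()
        # Each member is assumed to be adversarially trained on its own.
        self.models = nn.ModuleList(models)

    def forward(self, x):
        # Multi-source: robustness comes both from adversarial training
        # (each member) and from randomness (the switch), instead of
        # pushing a single defense ever harder.
        model = random.choice(self.models)
        return model(x)

# Usage: pool = RandomModelSwitcher([make_model() for _ in range(4)]),
# where make_model() is a placeholder returning an independently
# adversarially trained network.
```

The two cost knobs of the multi-cost view are visible here: per-member robustness is paid for in adversarial-training compute, while the random switch adds a second, independent source whose cost is memory for the pool.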
Related papers
- MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks [21.227398434694724]
We introduce a framework that incorporates a precision-optimized noise predictor to enhance the effectiveness of its attacks.
Our framework provides a cutting-edge solution for multi-modal adversarial attacks, ensuring reduced latency.
We demonstrate that our framework achieves outstanding transferability and robustness against purification defenses.
arXiv Detail & Related papers (2024-10-17T23:52:39Z)
- Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models [9.762046320216005]
Large vision models have been found vulnerable to adversarial examples, emphasizing the need for enhancing their adversarial robustness.
Recent approaches propose robust fine-tuning methods, such as adversarial tuning of low-rank adaptation (LoRA) in large vision models, but they still struggle to match the accuracy of full-parameter adversarial fine-tuning.
We propose hyper adversarial tuning (HyperAT), which leverages shared defensive knowledge among different methods to improve model robustness both efficiently and effectively.
arXiv Detail & Related papers (2024-10-08T12:05:01Z)
- Position: Towards Resilience Against Adversarial Examples [42.09231029292568]
We provide a definition of adversarial resilience and outline considerations for designing an adversarially resilient defense.
We then introduce a subproblem of adversarial resilience which we call continual adaptive robustness.
We demonstrate the connection between continual adaptive robustness and previously studied problems of multiattack robustness and unforeseen attack robustness.
arXiv Detail & Related papers (2024-05-02T14:58:44Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks [47.21491911505409]
Adversarial training serves as one of the most popular and effective methods for defending against adversarial perturbations.
We propose a novel multi-perturbation adversarial training framework, parameter-saving adversarial training (PSAT), to reinforce multi-perturbation robustness.
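A hedged sketch of what a parameter-saving, hypernetwork-based setup could look like (an assumed structure for illustration; the paper's actual architecture may differ): a shared backbone, with a small hypernetwork emitting classifier weights per perturbation type, so that no full per-perturbation copy of the model needs to be stored.

```python
# Illustrative hypernetwork sketch in the spirit of PSAT (assumed, not the
# paper's code): the hypernetwork maps a perturbation-type embedding
# (e.g., l_inf vs. l_2) to the final classifier's weights and biases.
import torch
import torch.nn as nn

class HyperClassifier(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10, num_perturbations=2):
        super().__init__()
        # Backbone assumes flattened 28x28 inputs for brevity.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
        self.embed = nn.Embedding(num_perturbations, 16)
        # Hypernetwork: embedding -> classifier weights + biases.
        self.hyper = nn.Linear(16, feat_dim * num_classes + num_classes)
        self.feat_dim, self.num_classes = feat_dim, num_classes

    def forward(self, x, perturbation_id):
        feats = self.backbone(x)                           # (B, D)
        params = self.hyper(self.embed(perturbation_id))   # (B, D*C + C)
        w = params[:, : self.feat_dim * self.num_classes]
        b = params[:, self.feat_dim * self.num_classes:]
        w = w.view(-1, self.num_classes, self.feat_dim)    # (B, C, D)
        # Per-perturbation-type logits from generated weights.
        return torch.einsum("bd,bcd->bc", feats, w) + b
```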
arXiv Detail & Related papers (2023-09-28T07:16:02Z)
- Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets [76.20705291443208]
We view adversarial attacks as a bargaining game in which different players negotiate to reach an agreement on a joint direction of parameter updating.
We design a novel framework that adjusts the budgets of different adversaries to avoid any player dominance.
Experiments on standard benchmarks show that applying the proposed framework to existing approaches significantly advances multi-target robustness.
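As an illustration of the adaptive-budget idea (the update rule below is an assumption for exposition, not taken from the paper), one round of rebalancing might shrink the budget of an adversary whose loss dominates and grow the others':

```python
# Hedged sketch of adaptive budgets across adversaries. The exact rule in
# the paper may differ; this only illustrates "avoid player domination".
def rebalance_budgets(budgets, losses, step=0.01):
    """budgets, losses: dicts keyed by attack name (e.g., 'linf', 'l2')."""
    mean_loss = sum(losses.values()) / len(losses)
    new_budgets = {}
    for name, eps in budgets.items():
        if losses[name] > mean_loss:
            # A dominant player gets a smaller budget so the joint
            # parameter update is not pulled toward it alone.
            new_budgets[name] = max(eps - step, 0.0)
        else:
            new_budgets[name] = eps + step
    return new_budgets

# Example (hypothetical values):
# rebalance_budgets({'linf': 8/255, 'l2': 0.5}, {'linf': 2.1, 'l2': 0.7})
```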
arXiv Detail & Related papers (2023-06-27T14:02:10Z)
- "What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models [71.91835408379602]
Adversarial examples have long been considered a real threat to machine learning models.
We propose an alternative deployment-based defense paradigm that goes beyond the traditional white-box and black-box threat models.
arXiv Detail & Related papers (2021-02-09T20:07:13Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
We propose two extensions of the PGD attack that overcome failures due to suboptimal step size and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
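The evaluation protocol this describes reduces to a per-example worst case over the attack ensemble: an example counts as robust only if it survives every attack. A minimal sketch follows; the attack callables are placeholders (e.g., for the PGD variants), not the paper's implementation.

```python
# Worst-case robust-accuracy evaluation over an ensemble of attacks.
# Attack callables are hypothetical placeholders with a common signature.
import torch

def robust_accuracy(model, x, y, attacks):
    """attacks: list of callables (model, x, y) -> adversarial x."""
    survived = torch.ones(len(x), dtype=torch.bool)
    for attack in attacks:
        x_adv = attack(model, x, y)
        preds = model(x_adv).argmax(dim=1)
        survived &= preds.eq(y)  # robust only if every attack fails
    return survived.float().mean().item()
```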
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.