Guided Adversarial Attack for Evaluating and Enhancing Adversarial
Defenses
- URL: http://arxiv.org/abs/2011.14969v1
- Date: Mon, 30 Nov 2020 16:39:39 GMT
- Title: Guided Adversarial Attack for Evaluating and Enhancing Adversarial
Defenses
- Authors: Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh
Babu
- Abstract summary: We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
- Score: 59.58128343334556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in the development of adversarial attacks have been fundamental to
the progress of adversarial defense research. Efficient and effective attacks
are crucial for reliable evaluation of defenses, and also for developing robust
models. Adversarial attacks are often generated by maximizing standard losses
such as the cross-entropy loss or maximum-margin loss within a constraint set
using Projected Gradient Descent (PGD). In this work, we introduce a relaxation
term to the standard loss that finds more suitable gradient directions,
increases attack efficacy, and leads to more efficient adversarial training. We
propose Guided Adversarial Margin Attack (GAMA), which utilizes function
mapping of the clean image to guide the generation of adversaries, thereby
resulting in stronger attacks. We evaluate our attack against multiple defenses
and show improved performance when compared to existing attacks. Further, we
propose Guided Adversarial Training (GAT), which achieves state-of-the-art
performance amongst single-step defenses by utilizing the proposed relaxation
term for both attack generation and training.
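To make the formulation concrete, here is a minimal PyTorch sketch of a GAMA-style guided PGD attack under an L-inf threat model. The softmax-margin loss, the relaxation term ||p(x_adv) - p(x)||^2 guided by the function mapping of the clean image, and its decay to zero follow the abstract's description; the step size, iteration counts, and initial weight lam0 are illustrative assumptions, not the authors' exact settings.

    import torch
    import torch.nn.functional as F

    def gama_pgd(model, x, y, eps=8/255, step=2/255, steps=100,
                 lam0=10.0, decay_steps=50):
        # Sketch of a GAMA-style guided PGD attack (L-inf threat model).
        # Maximizes a softmax-margin loss plus a relaxation term
        # lam * ||p(x_adv) - p(x)||^2 that guides early gradient steps;
        # lam is decayed linearly to zero. Hyperparameters are illustrative.
        model.eval()
        with torch.no_grad():
            p_clean = F.softmax(model(x), dim=1)  # function mapping of the clean image

        delta = torch.empty_like(x).uniform_(-eps, eps)  # random start in the eps-ball
        for t in range(steps):
            lam = max(lam0 * (1 - t / decay_steps), 0.0)  # linearly decayed weight
            delta.requires_grad_(True)
            p_adv = F.softmax(model(x + delta), dim=1)

            p_true = p_adv.gather(1, y[:, None]).squeeze(1)
            mask = F.one_hot(y, p_adv.size(1)).bool()
            p_other = p_adv.masked_fill(mask, -1.0).max(dim=1).values
            margin = p_other - p_true                    # push a wrong class above the true one
            relax = ((p_adv - p_clean) ** 2).sum(dim=1)  # pull outputs away from the clean mapping
            loss = (margin + lam * relax).sum()

            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                delta = (delta + step * grad.sign()).clamp(-eps, eps)
                delta = (x + delta).clamp(0, 1) - x      # keep the image in the valid range
        return (x + delta).detach()

The relaxation term dominates early iterations and supplies informative gradient directions even where the margin loss is flat; as lam decays to zero, the attack reduces to a plain maximum-margin PGD.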
Related papers
- Fast Preemption: Forward-Backward Cascade Learning for Efficient and Transferable Proactive Adversarial Defense [13.252842556505174]
Deep learning technology has become untrustworthy due to its sensitivity to adversarial attacks.
We have devised a proactive strategy that preempts attacks by safeguarding media upfront.
We have also devised the first, to our knowledge, effective white-box adaptive reversion attack.
arXiv Detail & Related papers (2024-07-22T10:23:44Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses; a surrogate-guided sketch follows the entry.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
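The G-PGA summary above does not spell out the guided mechanism, so the following is only one plausible reading, stated as an assumption: a surrogate-guided PGD step that takes ascent directions from a smooth surrogate model when the defended target masks its own gradients. The function name and signature are hypothetical.

    import torch
    import torch.nn.functional as F

    def surrogate_guided_step(target, surrogate, x, delta, y, step, eps):
        # Hypothetical surrogate-guided PGD step: gradients come from the
        # differentiable surrogate; success is judged on the defended target.
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(surrogate(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + step * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x
            fooled = target(x + delta).argmax(dim=1) != y
        return delta, fooled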
- Deep-Attack over the Deep Reinforcement Learning [26.272161868927004]
Adversarial attack developments have made reinforcement learning more vulnerable.
We propose a reinforcement learning-based attacking framework that considers effectiveness and stealthiness simultaneously.
We also propose a new metric to evaluate the performance of the attack model in these two aspects.
arXiv Detail & Related papers (2022-05-02T10:58:19Z)
- Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [22.531976474053057]
The Projected Gradient Descent (PGD) attack has been demonstrated to be one of the most successful adversarial attacks.
We propose the Scale-Invariant Adversarial Attack (SI-PGD), which utilizes the angle between the features in the penultimate layer and the weights in the softmax layer to guide the generation of adversaries; a sketch of this idea follows the entry.
arXiv Detail & Related papers (2022-01-29T08:40:53Z)
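Below is a minimal sketch of the scale-invariant loss idea from the SI-PGD entry above, assuming cosine similarity as the angle measure between penultimate-layer features and softmax weights; the paper's exact formulation may differ.

    import torch
    import torch.nn.functional as F

    def scale_invariant_margin(features, weight, y):
        # features: (B, D) penultimate activations; weight: (C, D) softmax weights.
        # Cosine "logits" are unchanged by rescaling of features or weights,
        # which is what makes the resulting attack loss scale-invariant.
        cos = F.normalize(features, dim=1) @ F.normalize(weight, dim=1).t()  # (B, C)
        cos_true = cos.gather(1, y[:, None]).squeeze(1)
        mask = F.one_hot(y, cos.size(1)).bool()
        cos_other = cos.masked_fill(mask, -2.0).max(dim=1).values  # cosines lie in [-1, 1]
        return cos_other - cos_true  # maximize to generate adversaries

The returned margin can be maximized with the same projected ascent loop as any PGD attack, with the cosine scores standing in for logits.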
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network; a sketch of this idea follows the entry.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
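The MAMA entry above describes learning the attack optimizer itself; the sketch below shows an RNN-parameterized update rule in that spirit. The coordinate-wise LSTM, hidden size, and tanh-bounded output are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class RNNAttackOptimizer(nn.Module):
        # Maps the current loss gradient w.r.t. the perturbation to an update
        # direction, replacing the hand-crafted sign(grad) step of PGD.
        def __init__(self, hidden=20):
            super().__init__()
            self.cell = nn.LSTMCell(1, hidden)  # operates coordinate-wise on gradients
            self.head = nn.Linear(hidden, 1)

        def forward(self, grad, state=None):
            g = grad.reshape(-1, 1)             # one input per perturbation coordinate
            h, c = self.cell(g, state)
            update = self.head(h).reshape(grad.shape)
            return torch.tanh(update), (h, c)   # bounded update direction

A learned attack step would then look like delta = (delta + step * update).clamp(-eps, eps), with the LSTM's parameters meta-trained to maximize attack loss across a set of defenses, per the model-agnostic training described above.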
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.