Dual Attention Suppression Attack: Generate Adversarial Camouflage in
Physical World
- URL: http://arxiv.org/abs/2103.01050v1
- Date: Mon, 1 Mar 2021 14:46:43 GMT
- Title: Dual Attention Suppression Attack: Generate Adversarial Camouflage in
Physical World
- Authors: Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, and
Xianglong Liu
- Abstract summary: Motivated by the viewpoint that attention reflects the intrinsic characteristics of the recognition process, this paper proposes the Dual Attention Suppression (DAS) attack.
We generate transferable adversarial camouflages by distracting the attention patterns shared across models from the target to non-target regions.
Based on the fact that human visual attention always focuses on salient items, we evade the human-specific bottom-up attention to generate visually-natural camouflages.
- Score: 33.63565658548095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are vulnerable to adversarial examples. As a more
threatening type for practical deep learning systems, physical adversarial
examples have received extensive research attention in recent years. However,
without exploiting intrinsic characteristics such as model-agnostic and
human-specific patterns, existing works generate weak adversarial perturbations
in the physical world, which fail to transfer across different models and have
a visually suspicious appearance. Motivated by the viewpoint that
attention reflects the intrinsic characteristics of the recognition process,
this paper proposes the Dual Attention Suppression (DAS) attack to generate
visually-natural physical adversarial camouflages with strong transferability
by suppressing both model and human attention. As for attacking, we generate
transferable adversarial camouflages by distracting the model-shared similar
attention patterns from the target to non-target regions. Meanwhile, based on
the fact that human visual attention always focuses on salient items (e.g.,
suspicious distortions), we evade the human-specific bottom-up attention to
generate visually-natural camouflages which are correlated to the scenario
context. We conduct extensive experiments in both the digital and physical
worlds for classification and detection tasks on up-to-date models (e.g.,
Yolo-V5) and demonstrate that our method significantly outperforms
state-of-the-art methods.
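The abstract describes two components: a model-attention term that distracts the attention maps shared across models from the target object to non-target regions, and a human-attention term that keeps the camouflage visually natural and consistent with the scene. The sketch below is a minimal PyTorch rendering of that idea, not the authors' implementation; the Grad-CAM-style attention map, the binary `target_mask`, and the total-variation smoothness penalty are illustrative assumptions.

```python
# Illustrative sketch of a DAS-style dual-attention objective, not the authors'
# released code. Assumptions: a Grad-CAM-like attention map computed from one
# convolutional layer, a binary `target_mask` marking the object region, and a
# total-variation term standing in for the "visually natural" constraint.
import torch
import torch.nn.functional as F


def gradcam_attention(feats, logits, class_idx):
    """Grad-CAM-style attention map from conv features `feats` and `logits`
    produced by the same forward pass (e.g., captured with a forward hook)."""
    score = logits[:, class_idx].sum()
    grads = torch.autograd.grad(score, feats, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)             # channel weights
    cam = F.relu((weights * feats).sum(dim=1))                 # (B, H, W)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)   # normalize to [0, 1]


def dual_attention_loss(cam, target_mask, camouflage, tv_weight=1e-3):
    """Push model attention off the target region and keep the camouflage
    smooth, a simple stand-in for the human-attention (naturalness) term."""
    mask = F.interpolate(target_mask.float(), size=cam.shape[-2:],
                         mode="nearest").squeeze(1)
    distract = (cam * mask).sum() / (mask.sum() + 1e-8)        # attention left on target
    tv = (camouflage[..., :, 1:] - camouflage[..., :, :-1]).abs().mean() + \
         (camouflage[..., 1:, :] - camouflage[..., :-1, :]).abs().mean()
    return distract + tv_weight * tv                           # minimize w.r.t. camouflage
```

In the paper's physical-world setting the camouflage would be rendered onto the target object and the objective evaluated over several surrogate models to exploit their shared attention patterns; here it is treated as a plain image tensor for brevity.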
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of
Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features from natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determining the next point of focus (the fixation) and the other interpreting the visuals surrounding it.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
arXiv Detail & Related papers (2022-06-15T03:44:42Z) - Transferable Physical Attack against Object Detection with Separable
Attention [14.805375472459728]
Transferable adversarial attacks have long been in the spotlight, since deep learning models have been shown to be vulnerable to adversarial samples.
In this paper, we put forward a novel method of generating physically realizable adversarial camouflage to achieve transferable attack against detection models.
arXiv Detail & Related papers (2022-05-19T14:34:55Z) - On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z) - Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that adds extra noise and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z) - Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.