Towards Benchmarking and Assessing Visual Naturalness of Physical World
Adversarial Attacks
- URL: http://arxiv.org/abs/2305.12863v1
- Date: Mon, 22 May 2023 09:40:32 GMT
- Title: Towards Benchmarking and Assessing Visual Naturalness of Physical World
Adversarial Attacks
- Authors: Simin Li, Shuning Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang,
Aishan Liu, Xin Yi, Xianglong Liu
- Abstract summary: In physical world attacks, evaluating naturalness is highly emphasized since humans can easily detect and remove unnatural attacks.
In this paper, we take the first step to benchmark and assess the visual naturalness of physical world attacks, taking the autonomous driving scenario as a first attempt.
We introduce the Dual Prior Alignment (DPA) network, which aims to embed human knowledge into the model reasoning process.
- Score: 48.42363580408451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical world adversarial attacks are highly practical and threatening
attacks, which fool real world deep learning systems by generating conspicuous
and maliciously crafted real world artifacts. In physical world attacks,
evaluating naturalness is highly emphasized since humans can easily detect and
remove unnatural attacks. However, current studies evaluate naturalness in a
case-by-case fashion, which suffers from errors, bias, and inconsistencies. In
this paper, we take the first step to benchmark and assess the visual
naturalness of physical world attacks, taking the autonomous driving scenario
as a first attempt. First, to benchmark attack naturalness, we contribute the
first Physical Attack Naturalness (PAN) dataset with human ratings and gaze
data. PAN verifies several insights for the first time: naturalness is
(disparately) affected by contextual features (i.e., environmental and semantic
variations) and correlates with behavioral features (i.e., gaze signals). Second, to
automatically assess attack naturalness that aligns with human ratings, we
further introduce Dual Prior Alignment (DPA) network, which aims to embed human
knowledge into model reasoning process. Specifically, DPA imitates human
reasoning in naturalness assessment by rating prior alignment and mimics human
gaze behavior by attentive prior alignment. We hope our work fosters research
to improve and automatically assess the naturalness of physical world attacks. Our
code and dataset can be found at https://github.com/zhangsn-19/PAN.
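The abstract describes DPA only at a high level. As a rough, hypothetical illustration (not the authors' implementation, and with all module names, shapes, and loss weights assumed rather than taken from the paper), the sketch below shows how a naturalness-assessment model might combine a rating-prior loss against human ratings with an attentive-prior loss against human gaze maps:

```python
# Hypothetical sketch of a dual-prior naturalness assessor (PyTorch).
# Assumptions: images are (B, 3, H, W), human ratings are (B,) scores,
# and gaze maps are (B, 1, H0, W0) saliency-style heatmaps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPriorSketch(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Small placeholder backbone; the paper's actual backbone is not specified here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(feat_dim, 1, 1)      # spatial attention map
        self.rating_head = nn.Linear(feat_dim, 1)  # naturalness score

    def forward(self, x):
        feat = self.backbone(x)                      # (B, C, H', W')
        attn = torch.sigmoid(self.attn(feat))        # (B, 1, H', W')
        pooled = (feat * attn).mean(dim=(2, 3))      # attention-weighted pooling
        score = self.rating_head(pooled).squeeze(-1) # (B,)
        return score, attn

def dual_prior_loss(score, attn, human_rating, gaze_map, w_gaze=1.0):
    """Rating prior: regress toward human ratings.
    Attentive prior: align model attention with human gaze maps."""
    rating_loss = F.mse_loss(score, human_rating)
    gaze_resized = F.interpolate(gaze_map, size=attn.shape[-2:],
                                 mode="bilinear", align_corners=False)
    gaze_loss = F.mse_loss(attn, gaze_resized)
    return rating_loss + w_gaze * gaze_loss
```

This is only meant to make the two alignment ideas concrete; the real DPA architecture and losses are documented in the paper and the linked repository.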
Related papers
- Exploring the Naturalness of AI-Generated Images [59.04528584651131]
We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in a way that aligns with human ratings.
We demonstrate that JOINT significantly outperforms baselines for providing more subjectively consistent results on naturalness assessment.
arXiv Detail & Related papers (2023-12-09T06:08:09Z) - F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of
Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which enforces the model to focus on core features from natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z) - How do humans perceive adversarial text? A reality check on the validity
and naturalness of word-based adversarial attacks [4.297786261992324]
Adversarial attacks are malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.
We surveyed 378 human participants about the perceptibility of text adversarial examples produced by state-of-the-art methods.
Our results underline that existing text attacks are impractical in real-world scenarios where humans are involved.
arXiv Detail & Related papers (2023-05-24T21:52:13Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - A Survey on Physical Adversarial Attack in Computer Vision [7.053905447737444]
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted by malicious tiny noise.
With the increasing deployment of DNN-based systems in the real world, strengthening the robustness of these systems has become an urgent need.
arXiv Detail & Related papers (2022-09-28T17:23:52Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)