Detecting Adversarial Perturbations in Multi-Task Perception
- URL: http://arxiv.org/abs/2203.01177v1
- Date: Wed, 2 Mar 2022 15:25:17 GMT
- Title: Detecting Adversarial Perturbations in Multi-Task Perception
- Authors: Marvin Klingner, Varun Ravi Kumar, Senthil Yogamani, Andreas Bär, and Tim Fingscheidt
- Abstract summary: We propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks.
Adversarial perturbations are detected by inconsistencies between extracted edges of the input image, the depth output, and the segmentation output.
We show that, assuming a 5% false positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation.
- Score: 32.9951531295576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks (DNNs) achieve impressive performance on
environment perception tasks, their sensitivity to adversarial perturbations
limits their use in practical applications. In this paper, we (i) propose a
novel adversarial perturbation detection scheme based on multi-task perception
of complex vision tasks (i.e., depth estimation and semantic segmentation).
Specifically, adversarial perturbations are detected by inconsistencies between
extracted edges of the input image, the depth output, and the segmentation
output. To further improve this technique, we (ii) develop a novel edge
consistency loss between all three modalities, thereby improving their initial
consistency which in turn supports our detection scheme. We verify our
detection scheme's effectiveness by employing various known attacks and image
noises. In addition, we (iii) develop a multi-task adversarial attack, aiming
at fooling both tasks as well as our detection scheme. Experimental evaluation
on the Cityscapes and KITTI datasets shows that, assuming a 5% false positive
rate, up to 100% of images are correctly detected as adversarially perturbed,
depending on the strength of the perturbation. Code will be available on
GitHub. A short video at https://youtu.be/KKa6gOyWmH4 provides qualitative
results.
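The detection mechanism lends itself to a compact illustration. The following is a minimal, illustrative Python/NumPy sketch of the general idea rather than the authors' implementation: edges are taken as simple gradient magnitudes, consistency is measured with Pearson correlation, and the detection threshold is calibrated on clean images so that roughly 5% of them are flagged (the assumed false positive rate). The paper's actual edge extractor, consistency measure, and calibration procedure may differ.

```python
import numpy as np

def edge_map(x):
    """Gradient-magnitude edge map of a 2-D array, scaled to [0, 1]."""
    gy, gx = np.gradient(x.astype(np.float64))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def seg_boundaries(labels):
    """Binary boundary map of a 2-D class-label map (label changes between neighbors)."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    return b.astype(np.float64)

def consistency(a, b):
    """Pearson correlation between two flattened edge maps (one possible consistency measure)."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def detection_score(image_gray, depth, seg_labels):
    """Mean pairwise edge consistency across image, depth, and segmentation.
    Adversarial perturbations tend to break this consistency, lowering the score."""
    e_img, e_dep, e_seg = edge_map(image_gray), edge_map(depth), seg_boundaries(seg_labels)
    return float(np.mean([consistency(e_img, e_dep),
                          consistency(e_img, e_seg),
                          consistency(e_dep, e_seg)]))

def calibrate_threshold(clean_scores, fpr=0.05):
    """Threshold below which roughly `fpr` of clean images would be (falsely) flagged."""
    return float(np.quantile(np.asarray(clean_scores), fpr))

def is_adversarial(score, threshold):
    return score < threshold
```

In the same spirit, a differentiable version of the consistency term could act as a training penalty, which is the intuition behind the edge consistency loss in contribution (ii); the paper defines that loss on the network outputs during training rather than on post-hoc NumPy edge maps.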
Related papers
- AdvLogo: Adversarial Patch Attack against Object Detectors based on Diffusion Models [12.678320577368051]
We propose a novel framework of patch attack from semantic perspective, which we refer to as AdvLogo.
We leverage the semantic understanding of the diffusion denoising process and drive the process to adversarial subareas by perturbing the latent and unconditional embeddings at the last timestep.
Experimental results demonstrate that AdvLogo achieves strong attack performance while maintaining high visual quality.
arXiv Detail & Related papers (2024-09-11T04:30:45Z) - Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method (an illustrative sketch of this style of detection follows this list).
arXiv Detail & Related papers (2024-08-19T14:13:30Z) - On Inherent Adversarial Robustness of Active Vision Systems [7.803487547944363]
We show that two active vision methods, GFNet and FALcon, achieve 2-3 times greater robustness than a standard passive convolutional network under state-of-the-art adversarial attacks.
More importantly, we provide illustrative and interpretable visualization analysis that demonstrates how performing inference from distinct fixation points makes active vision methods less vulnerable to malicious inputs.
arXiv Detail & Related papers (2024-03-29T22:51:45Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Uncertainty-based Detection of Adversarial Attacks in Semantic
Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection performance on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Adversarial Attacks on Multi-task Visual Perception for Autonomous
Driving [0.5735035463793008]
Adversarial attacks are applied to a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Experiments consider both white-box and black-box attacks for targeted and untargeted cases, attacking one task and inspecting the effect on all the others.
We conclude this paper by comparing and discussing the experimental results, proposing insights and future work.
arXiv Detail & Related papers (2021-07-15T16:53:48Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)