Detecting Adversarial Perturbations in Multi-Task Perception
- URL: http://arxiv.org/abs/2203.01177v1
- Date: Wed, 2 Mar 2022 15:25:17 GMT
- Title: Detecting Adversarial Perturbations in Multi-Task Perception
- Authors: Marvin Klingner and Varun Ravi Kumar and Senthil Yogamani and Andreas Bär and Tim Fingscheidt
- Abstract summary: We propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks.
Adversarial perturbations are detected by inconsistencies between the extracted edges of the input image, the depth output, and the segmentation output.
We show that, assuming a 5% false-positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation.
- Score: 32.9951531295576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks (DNNs) achieve impressive performance on
environment perception tasks, their sensitivity to adversarial perturbations
limits their use in practical applications. In this paper, we (i) propose a
novel adversarial perturbation detection scheme based on multi-task perception
of complex vision tasks (i.e., depth estimation and semantic segmentation).
Specifically, adversarial perturbations are detected by inconsistencies between
extracted edges of the input image, the depth output, and the segmentation
output. To further improve this technique, we (ii) develop a novel edge
consistency loss between all three modalities, thereby improving their initial
consistency which in turn supports our detection scheme. We verify our
detection scheme's effectiveness by employing various known attacks and image
noises. In addition, we (iii) develop a multi-task adversarial attack, aiming
at fooling both tasks as well as our detection scheme. Experimental evaluation
on the Cityscapes and KITTI datasets shows that, under an assumption of a 5%
false-positive rate, up to 100% of images are correctly detected as
adversarially perturbed, depending on the strength of the perturbation. Code
will be available on GitHub. A short video at https://youtu.be/KKa6gOyWmH4
provides qualitative results.
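The abstract's detection idea can be illustrated with a minimal sketch: extract edge maps from the input image, the predicted depth map, and the segmentation map, then flag an image as adversarial when the edge maps disagree more than a threshold calibrated for a 5% false-positive rate on clean images. The Sobel-based edge extractor, the IoU-based consistency measure, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sobel_edges(x: np.ndarray) -> np.ndarray:
    """Binary edge map from Sobel gradient magnitude (2-D float array)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(x.astype(float), 1, mode="edge")
    gx = np.zeros(x.shape, dtype=float)
    gy = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    # Simple adaptive threshold on gradient magnitude.
    return mag > mag.mean() + mag.std()

def consistency_score(image, depth, seg_labels) -> float:
    """Mean pairwise IoU of the three edge maps (higher = more consistent)."""
    e_img = sobel_edges(image)
    e_depth = sobel_edges(depth)
    e_seg = sobel_edges(seg_labels.astype(float))  # label-boundary edges
    pairs = [(e_img, e_depth), (e_img, e_seg), (e_depth, e_seg)]
    ious = [(a & b).sum() / max((a | b).sum(), 1) for a, b in pairs]
    return float(np.mean(ious))

def is_adversarial(image, depth, seg_labels, threshold) -> bool:
    """threshold would be calibrated on clean images for a 5% FPR."""
    return consistency_score(image, depth, seg_labels) < threshold
```

On a clean scene, object boundaries tend to coincide across all three modalities, so the pairwise IoU stays high; a perturbation that corrupts the depth or segmentation output breaks this agreement and lowers the score below the calibrated threshold.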
Related papers
- On Inherent Adversarial Robustness of Active Vision Systems [7.803487547944363]
We show that two active vision methods - GFNet and FALcon - achieve (2-3) times greater robustness compared to a standard passive convolutional network under state-of-the-art adversarial attacks.
More importantly, we provide illustrative and interpretable visualization analysis that demonstrates how performing inference from distinct fixation points makes active vision methods less vulnerable to malicious inputs.
arXiv Detail & Related papers (2024-03-29T22:51:45Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving [0.5735035463793008]
Adversarial attacks are applied to a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Experiments consider both white-box and black-box attacks for targeted and untargeted cases, attacking one task and inspecting the effect on all the others.
We conclude this paper by comparing and discussing the experimental results, proposing insights and future work.
arXiv Detail & Related papers (2021-07-15T16:53:48Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of a cover image should be treated differently, since pixels differ in how much modification they can tolerate.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-07-19T19:46:45Z)
- Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency [25.039201331256372]
We augment the Deep Neural Network with a system that learns context consistency rules during training and checks for the violations of the same during testing.
Our approach builds a set of auto-encoders, one for each object class, appropriately trained so as to output a discrepancy between the input and output if an added adversarial perturbation violates context consistency rules.
Experiments on PASCAL VOC and MS COCO show that our method effectively detects various adversarial attacks and achieves high ROC-AUC (over 0.95 in most cases).
arXiv Detail & Related papers (2020-05-28T21:25:21Z)
- Monocular Depth Estimators: Vulnerabilities and Attacks [6.821598757786515]
Recent advances in neural networks have led to reliable monocular depth estimation.
Deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection, and segmentation.
In this paper, we investigate the vulnerability of state-of-the-art monocular depth estimation networks to adversarial attacks.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.