Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios
- URL: http://arxiv.org/abs/2202.04781v1
- Date: Thu, 10 Feb 2022 00:47:36 GMT
- Title: Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios
- Authors: Jung Im Choi, Qing Tian
- Abstract summary: We present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles.
Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than those generated from classification and/or localization losses.
The proposed adversarial defense approach can improve the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO_traffic, respectively.
- Score: 3.236217153362305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual detection is a key task in autonomous driving, and it serves as one
foundation for self-driving planning and control. Deep neural networks have
achieved promising results in various computer vision tasks, but they are known
to be vulnerable to adversarial attacks. A comprehensive understanding of deep
visual detectors' vulnerability is required before people can improve their
robustness. However, only a few adversarial attack/defense works have focused
on object detection, and most of them employed only classification and/or
localization losses, ignoring the objectness aspect. In this paper, we identify
a serious objectness-related adversarial vulnerability in YOLO detectors and
present an effective attack strategy targeting the objectness aspect of visual
detection in autonomous vehicles. Furthermore, to address such vulnerability,
we propose a new objectness-aware adversarial training approach for visual
detection. Experiments show that the proposed attack targeting the objectness
aspect is 45.17% and 43.50% more effective than those generated from
classification and/or localization losses on the KITTI and COCO_traffic
datasets, respectively. Also, the proposed adversarial defense approach can
improve the detectors' robustness against objectness-oriented attacks by up to
21% and 12% mAP on KITTI and COCO_traffic, respectively.
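As a rough illustration of the paper's idea (not the authors' released code), the sketch below shows how an objectness-targeted PGD attack and an objectness-aware adversarial training step might look in PyTorch. The `model_objectness`, `full_detection_loss`, and `obj_targets` interfaces are assumptions introduced for illustration; a real YOLO implementation would need a small adapter exposing its per-anchor objectness logits and its full training loss, and the paper's exact loss weighting may differ.

```python
# Hedged sketch, assuming a hypothetical model_objectness(images) -> (B, A)
# objectness-logit interface; this is NOT the paper's official implementation.
import torch
import torch.nn.functional as F


def objectness_pgd_attack(model_objectness, images, obj_targets,
                          eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style L_inf attack that maximizes only the objectness loss.

    model_objectness: callable(images) -> objectness logits of shape (B, A)
    obj_targets:      tensor (B, A), 1.0 where a ground-truth object is assigned
    """
    x_adv = images.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        obj_logits = model_objectness(x_adv)
        # Ascend the objectness BCE loss so the detector's confidence that
        # "an object is here" collapses on the labeled anchors.
        loss = F.binary_cross_entropy_with_logits(obj_logits, obj_targets.float())
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = images + (x_adv - images).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                       # keep a valid image
        x_adv = x_adv.detach()
    return x_adv


def objectness_aware_training_step(model_objectness, full_detection_loss,
                                   optimizer, images, targets, obj_targets):
    """One adversarial-training step mixing clean and objectness-attacked images.

    full_detection_loss: callable(images, targets) -> scalar YOLO training loss
    """
    x_adv = objectness_pgd_attack(model_objectness, images, obj_targets)
    optimizer.zero_grad()
    loss = full_detection_loss(images, targets) + full_detection_loss(x_adv, targets)
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```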
Related papers
- A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models excel in various computer vision tasks but are susceptible to adversarial examples: subtle perturbations in input data that lead to incorrect predictions.
This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring.
arXiv Detail & Related papers (2024-08-04T05:22:08Z) - ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce ControlLoc, a novel physical-world adversarial patch attack designed to exploit hijacking vulnerabilities across the entire Autonomous Driving (AD) visual perception pipeline.
arXiv Detail & Related papers (2024-06-09T14:53:50Z) - Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial
Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z) - Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time
Object Detection Systems [14.976840260248913]
This paper presents three Targeted adversarial Objectness Gradient (TOG) attacks that cause object-vanishing, object-fabrication, and object-mislabeling.
We also present a universal objectness gradient attack that leverages adversarial transferability for black-box attacks.
The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
arXiv Detail & Related papers (2020-04-09T01:36:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.