A Real-Time Defense Against Object Vanishing Adversarial Patch Attacks for Object Detection in Autonomous Vehicles
- URL: http://arxiv.org/abs/2412.06215v1
- Date: Mon, 09 Dec 2024 05:21:14 GMT
- Title: A Real-Time Defense Against Object Vanishing Adversarial Patch Attacks for Object Detection in Autonomous Vehicles
- Authors: Jaden Mu
- Abstract summary: ADAV (Adversarial Defense for Autonomous Vehicles) is a novel defense methodology against object vanishing patch attacks.
ADAV runs in real-time and leverages contextual information from prior frames in an AV's video feed.
ADAV is evaluated using real-world driving data from the Berkeley Deep Drive BDD100K dataset.
- Abstract: Autonomous vehicles (AVs) increasingly use DNN-based object detection models in vision-based perception. Correct detection and classification of obstacles is critical to ensure safe, trustworthy driving decisions. Adversarial patches aim to fool a DNN with intentionally generated patterns concentrated in a localized region of an image. In particular, object vanishing patch attacks can cause object detection models to fail to detect most or all objects in a scene, posing a significant practical threat to AVs. This work proposes ADAV (Adversarial Defense for Autonomous Vehicles), a novel defense methodology against object vanishing patch attacks specifically designed for autonomous vehicles. Unlike existing defense methods, which have high latency or are designed for static images, ADAV runs in real-time and leverages contextual information from prior frames in an AV's video feed. ADAV checks whether the object detector's output for the target frame is temporally consistent with the output from a previous reference frame to detect the presence of a patch. If a patch is detected, ADAV uses gradient-based attribution to localize the adversarial pixels that break temporal consistency. This two-stage procedure allows ADAV to process clean inputs efficiently, and both stages are optimized for low latency. ADAV is evaluated using real-world driving data from the Berkeley DeepDrive BDD100K dataset, and demonstrates strong performance on both adversarial and clean inputs.
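The two-stage procedure lends itself to a compact sketch. Below is a minimal, illustrative Python version, assuming a differentiable detector that maps a (C, H, W) image tensor to a dict with 'boxes' and 'scores'; the IoU matching rule, the 0.5 thresholds, and the top-2% saliency mask are all assumptions, not the authors' implementation.

```python
import torch
from torchvision.ops import box_iou

def temporal_consistency(boxes_ref, boxes_cur, iou_thresh=0.5):
    """Fraction of reference-frame boxes matched by a current-frame box."""
    if boxes_ref.numel() == 0:
        return 1.0
    if boxes_cur.numel() == 0:
        return 0.0  # every reference object vanished
    best_iou = box_iou(boxes_ref, boxes_cur).max(dim=1).values
    return (best_iou > iou_thresh).float().mean().item()

def adav_defend(detector, frame_ref, frame_cur, consistency_thresh=0.5):
    # Stage 1: cheap temporal-consistency check against a prior reference frame.
    with torch.no_grad():
        boxes_ref = detector(frame_ref)["boxes"]
        out = detector(frame_cur)
    if temporal_consistency(boxes_ref, out["boxes"]) >= consistency_thresh:
        return out  # clean input: no gradient work at all

    # Stage 2: re-run with gradients, attribute detection confidence to input
    # pixels, occlude the most salient ones, and detect again.
    x = frame_cur.clone().requires_grad_(True)
    detector(x)["scores"].sum().backward()   # proxy attribution objective
    saliency = x.grad.abs().sum(dim=0)       # (H, W) per-pixel attribution
    k = int(0.98 * saliency.numel())         # flag the top 2% as suspect
    cutoff = saliency.flatten().kthvalue(k).values
    cleaned = frame_cur.clone()
    cleaned[:, saliency > cutoff] = 0.0      # mask suspected patch pixels
    return detector(cleaned)
```

Note how the branch structure mirrors the latency claim: a clean frame exits after one box comparison, and the costlier attribution pass runs only when temporal consistency breaks.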
Related papers
- ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce a novel physical-world adversarial patch attack, ControlLoc, designed to exploit hijacking vulnerabilities in entire Autonomous Driving (AD) visual perception.
arXiv Detail & Related papers (2024-06-09T14:53:50Z)
- Model Agnostic Defense against Adversarial Patch Attacks on Object Detection in Unmanned Aerial Vehicles [0.27309692684728615]
Object detection forms a key component in Unmanned Aerial Vehicles (UAVs).
Adversarial patch attacks on an onboard object detector can severely impair the performance of upstream tasks.
This paper proposes a novel model-agnostic defense mechanism against the threat of adversarial patch attacks.
arXiv Detail & Related papers (2024-05-29T15:19:07Z)
- EVD4UAV: An Altitude-Sensitive Benchmark to Evade Vehicle Detection in UAV [19.07281015014683]
Vehicle detection in images captured by Unmanned Aerial Vehicles (UAVs) has wide applications in aerial photography and remote sensing.
Recent studies show that adding an adversarial patch to objects can fool well-trained deep neural network-based object detectors.
We propose a new dataset, EVD4UAV, as an altitude-sensitive benchmark for evading vehicle detection in UAV images.
arXiv Detail & Related papers (2024-03-08T16:19:39Z)
- Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images [55.09446477517365]
Advanced Patch Attacks (PAs) on object detection in natural images have pointed out the great safety vulnerability in methods based on deep neural networks.
We propose a more threatening PA, dubbed TPA, that does not sacrifice visual quality.
To the best of our knowledge, this is the first attempt to study PAs on object detection in optical remote sensing images (O-RSIs), and we hope this work draws readers' interest to this topic.
arXiv Detail & Related papers (2023-02-13T02:35:49Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
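For intuition, here is a rough sketch of the patch-agnostic masking idea: run the detector on copies of the image with half-planes zeroed out, so that for a bounded patch at an unknown location, some masked copy is guaranteed to exclude it. The cut geometry, score threshold, and NMS-based merging below are assumptions; ObjectSeeker's actual box-pruning rule and its certification analysis are more involved.

```python
import torch
from torchvision.ops import nms

def half_plane_views(img, y, x):
    """Yield copies of img (C, H, W) with one side of a horizontal cut at
    row y or a vertical cut at column x zeroed out."""
    for region in [(slice(None, y), slice(None)),   # mask top
                   (slice(y, None), slice(None)),   # mask bottom
                   (slice(None), slice(None, x)),   # mask left
                   (slice(None), slice(x, None))]:  # mask right
        v = img.clone()
        v[(slice(None),) + region] = 0.0
        yield v

def masked_detections(detector, img, k=4, score_thresh=0.6):
    _, H, W = img.shape
    all_boxes, all_scores = [], []
    for i in range(1, k + 1):  # k evenly spaced cut positions per axis
        for view in half_plane_views(img, i * H // (k + 1), i * W // (k + 1)):
            out = detector(view)
            keep = out["scores"] > score_thresh
            all_boxes.append(out["boxes"][keep])
            all_scores.append(out["scores"][keep])
    boxes, scores = torch.cat(all_boxes), torch.cat(all_scores)
    keep = nms(boxes, scores, iou_threshold=0.5)  # crude merging stand-in
    return boxes[keep], scores[keep]
```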
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
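A minimal sketch of the segment-then-remove pattern follows. Here `patch_segmenter` is a hypothetical pre-trained network that outputs a per-pixel patch probability map; SAC additionally refines predicted masks with a shape-completion step omitted here.

```python
import torch

def sac_defend(detector, patch_segmenter, img, mask_thresh=0.5):
    """img: (C, H, W). patch_segmenter is assumed to return a (1, 1, H, W)
    map of per-pixel patch probabilities in [0, 1]."""
    with torch.no_grad():
        patch_prob = patch_segmenter(img.unsqueeze(0))[0, 0]
    mask = patch_prob > mask_thresh
    sanitized = img.clone()
    sanitized[:, mask] = 0.0  # occlude pixels predicted to belong to a patch
    return detector(sanitized)
```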
- Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposal in the adjacent frame of a video and then cycle back to itself for self-supervision.
arXiv Detail & Related papers (2021-04-16T21:35:08Z)
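The cycle idea can be sketched as a loss over per-proposal embeddings: match each proposal to its least similar counterpart in the adjacent frame, then require the return match to land back on the starting proposal. The formulation below is an illustrative assumption, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cycle_confusion_loss(feat_t, feat_t1):
    """feat_t: (N, D), feat_t1: (M, D) L2-normalized proposal embeddings
    from frames t and t+1."""
    sim = feat_t @ feat_t1.T              # (N, M) cosine similarities
    hard = sim.argmin(dim=1)              # most-different proposal in frame t+1
    back_sim = feat_t1[hard] @ feat_t.T   # (N, N) similarities back to frame t
    target = torch.arange(feat_t.size(0), device=feat_t.device)
    return F.cross_entropy(back_sim, target)  # each cycle should return home
```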
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to overfitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
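The underlying self-training loop can be sketched as follows: harvest confident detections on unlabeled target-domain frames as pseudo-labels, then fine-tune on them. `training_loss` is an assumed API, and the sketch omits both the 3D specifics and the paper's playback-based propagation of labels through time.

```python
import torch

def pseudo_label_finetune(detector, target_frames, optimizer,
                          score_thresh=0.8, epochs=1):
    # Pass 1: harvest confident detections as pseudo-labels.
    detector.eval()
    pseudo = []
    with torch.no_grad():
        for frame in target_frames:
            out = detector(frame)
            keep = out["scores"] > score_thresh
            pseudo.append({"boxes": out["boxes"][keep],
                           "labels": out["labels"][keep]})
    # Pass 2: fine-tune the detector on its own pseudo-labels.
    detector.train()
    for _ in range(epochs):
        for frame, targets in zip(target_frames, pseudo):
            loss = detector.training_loss(frame, targets)  # assumed API
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```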
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)