Resilience of Autonomous Vehicle Object Category Detection to Universal
Adversarial Perturbations
- URL: http://arxiv.org/abs/2107.04749v1
- Date: Sat, 10 Jul 2021 03:40:25 GMT
- Authors: Mohammad Nayeem Teli and Seungwon Oh
- Abstract summary: We evaluate the impact of universal perturbations on object detection at the class level.
We use the Faster R-CNN object detector on images of five categories: person, car, truck, stop sign and traffic light.
Our results indicate that person, car, traffic light, truck and stop sign are resilient in that order (most to least) to universal perturbations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the vulnerability of deep neural networks to adversarial examples,
numerous works on adversarial attacks and defenses have burgeoned over
the past several years. However, there seem to be some conventional views
regarding adversarial attacks and object detection approaches that most
researchers take for granted. In this work, we bring a fresh perspective on
those procedures by evaluating the impact of universal perturbations on object
detection at the class level. We apply this evaluation to a carefully curated
data set related to autonomous driving. We use the Faster R-CNN object detector
on images of five categories: person, car, truck, stop sign and traffic light
from the COCO data set, while perturbing the images using the Universal Dense
Object Suppression algorithm. Our results indicate that person, car, traffic
light, truck and stop sign are resilient in that order (most to least) to
universal perturbations. To the best of our knowledge, this is the first time
such a ranking has been established, which is significant for the security of
the data sets pertaining to autonomous vehicles and for object detection in
general.
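The evaluation the abstract describes can be approximated in a few lines. The following is a minimal sketch, not the authors' code: it assumes torchvision's pretrained Faster R-CNN, and the random `delta` is only a stand-in for a perturbation actually crafted with the Universal Dense Object Suppression algorithm.
```python
# Minimal sketch of class-level resilience measurement (not the authors' code).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO category ids for the five classes studied in the paper.
CLASSES = {1: "person", 3: "car", 8: "truck", 10: "traffic light", 13: "stop sign"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_detections(images, score_thresh=0.5):
    """Count confident detections per class over a list of CHW float images."""
    counts = dict.fromkeys(CLASSES, 0)
    with torch.no_grad():
        for output in model(images):
            keep = output["scores"] >= score_thresh
            for label in output["labels"][keep].tolist():
                if label in CLASSES:
                    counts[label] += 1
    return counts

def class_resilience(images, delta):
    """Fraction of clean detections per class that survive the perturbation."""
    clean = count_detections(images)
    adv = count_detections([(img + delta).clamp(0, 1) for img in images])
    return {CLASSES[c]: adv[c] / max(clean[c], 1) for c in CLASSES}

# Stand-ins: random images and a random "universal" perturbation (a real run
# would use the curated COCO images and a UDOS-crafted delta).
images = [torch.rand(3, 480, 640) for _ in range(4)]
delta = 0.03 * torch.randn(3, 480, 640)
print(class_resilience(images, delta))
```
Sorting the classes by the surviving fraction yields the kind of most-to-least resilience ranking the paper reports.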
Related papers
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z) - CODA: A Real-World Road Corner Case Dataset for Object Detection in
Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes the critical corner-case problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly on CODA, to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z) - Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving
Scenarios [3.236217153362305]
We present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles.
Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than those generated from classification and/or localization losses.
The proposed adversarial defense approach can improve the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO_traffic, respectively.
arXiv Detail & Related papers (2022-02-10T00:47:36Z) - Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z) - Identification of Driver Phone Usage Violations via State-of-the-Art
Object Detection with Tracking [8.147652597876862]
We propose a custom-trained state-of-the-art object detector to work with roadside cameras to capture driver phone usage without the need for human intervention.
The proposed approach also addresses the issues caused by windscreen glare and outlines the steps required to remedy them.
arXiv Detail & Related papers (2021-09-05T16:37:03Z) - Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the COCO object detection benchmark (a generic sketch of adversarial fine-tuning for detectors appears after this list).
arXiv Detail & Related papers (2021-03-23T19:45:26Z) - Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible people.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We also build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z) - Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z) - Synthesizing Unrestricted False Positive Adversarial Objects Using
Generative Models [0.0]
Adversarial examples are data points misclassified by neural networks.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
arXiv Detail & Related papers (2020-05-19T08:58:58Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step towards safer self-driving under unseen conditions with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
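As a companion to the adversarial-learning entry above, the sketch below illustrates the general idea of augmenting detector fine-tuning with adversarial examples. It is an illustration under stated assumptions, not the cited paper's method: it uses torchvision's Faster R-CNN rather than EfficientDet, and a single FGSM step rather than the paper's recipe.
```python
# Generic adversarial fine-tuning step for a detector (illustrative sketch,
# not the cited paper's exact method).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()  # in train mode the model returns a dict of losses
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def adversarial_step(images, targets, eps=2 / 255):
    # FGSM: one gradient-ascent step on the summed detection losses
    # with respect to the input images.
    adv = [img.clone().requires_grad_(True) for img in images]
    sum(model(adv, targets).values()).backward()
    adv = [(a + eps * a.grad.sign()).clamp(0, 1).detach() for a in adv]

    # Fine-tune on the clean and adversarial images together.
    optimizer.zero_grad()
    loss = sum(model(images, targets).values()) + sum(model(adv, targets).values())
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in batch: one random image with a single "car" box (COCO id 3).
imgs = [torch.rand(3, 300, 400)]
tgts = [{"boxes": torch.tensor([[10.0, 10.0, 120.0, 140.0]]),
         "labels": torch.tensor([3])}]
print(adversarial_step(imgs, tgts))
```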
This list is automatically generated from the titles and abstracts of the papers on this site.