Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception
- URL: http://arxiv.org/abs/2204.13973v1
- Date: Fri, 29 Apr 2022 09:49:29 GMT
- Title: Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception
- Authors: Zhongyuan Hau, Soteris Demetriou, Emil C. Lupu
- Abstract summary: We leverage 3D shadows to locate obstacles that are hidden from object detectors.
Our proposed methodology can be used to detect objects hidden by an adversary, as these objects still induce shadow artifacts in 3D point clouds.
We show that using 3D shadows for obstacle detection can achieve high accuracy in matching shadows to their object.
- Score: 6.371941066890801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous Vehicles (AVs) rely heavily on LiDAR sensors, which enable
spatial perception of their surroundings and help make driving decisions.
Recent works demonstrated attacks that aim to hide objects from AV perception,
which can result in severe consequences. 3D shadows are regions void of
measurements in 3D point clouds which arise from occlusions of objects in a
scene. 3D shadows were proposed as a physical invariant valuable for detecting
spoofed or fake objects. In this work, we leverage 3D shadows to locate
obstacles that are hidden from object detectors. We achieve this by searching
for void regions and locating the obstacles that cause these shadows. Our
proposed methodology can be used to detect objects that have been hidden by an
adversary: these objects, while hidden from 3D object detectors, still induce
shadow artifacts in 3D point clouds, which we use for obstacle detection. We
show that using 3D shadows for obstacle detection can achieve high accuracy in
matching shadows to their object and provide precise prediction of an
obstacle's distance from the ego-vehicle.
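To make the method concrete, below is a minimal sketch, not the authors' implementation, of how void angular sectors in a LiDAR sweep could be flagged as shadow candidates. The function name, bin count, and gap threshold are all hypothetical choices; the paper's full pipeline additionally matches each shadow region back to the occluding object.

```python
import numpy as np

def detect_shadow_candidates(points, max_range=60.0, n_bins=360, gap_thresh=8.0):
    """Flag azimuth sectors whose LiDAR returns stop well short of max_range.

    A long void behind the last return in a sector is a candidate 3D shadow;
    the range of that last return approximates the distance of the obstacle
    casting it, even if a 3D object detector reports no object there.
    """
    rng = np.hypot(points[:, 0], points[:, 1])       # range in the ground plane
    az = np.arctan2(points[:, 1], points[:, 0])      # azimuth in [-pi, pi]
    bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    candidates = []
    for b in range(n_bins):
        r = rng[bins == b]
        if r.size == 0:
            continue                                 # sector never observed
        last = r.max()
        if max_range - last > gap_thresh:            # void region behind last hit
            candidates.append((b, last))             # (sector index, est. distance)
    return candidates

# Toy usage on random points; real input would be one LiDAR sweep as an (N, 3) array.
cloud = np.random.uniform(-40, 40, size=(10000, 3))
print(detect_shadow_candidates(cloud)[:5])
```

The range of the last return in a flagged sector then gives a rough estimate of the hidden obstacle's distance from the ego-vehicle, which is the quantity the abstract reports predicting precisely.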
Related papers
- Street Gaussians without 3D Object Tracker [86.62329193275916]
Existing methods rely on labor-intensive manual labeling of object poses to reconstruct dynamic objects in canonical space and move them based on these poses during rendering.
We propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy.
We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections.
arXiv Detail & Related papers (2024-12-07T05:49:42Z)
- Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection [54.78470057491049]
Occupancy has emerged as a promising alternative for 3D scene perception.
We introduce object-centric occupancy as a supplement to object bounding boxes.
We show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors.
arXiv Detail & Related papers (2024-12-06T16:12:38Z)
- Improving Distant 3D Object Detection Using 2D Box Supervision [97.80225758259147]
We propose LR3D, a framework that learns to recover the missing depth of distant objects.
Our framework is general and could benefit a wide range of 3D detection methods.
arXiv Detail & Related papers (2024-03-14T09:54:31Z)
- SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors [37.80095745939221]
We present SHIFT3D, a differentiable pipeline for generating 3D shapes that are structurally plausible yet challenging to 3D object detectors.
In safety-critical applications like autonomous driving, discovering such novel challenging objects can offer insight into unknown vulnerabilities of 3D detectors.
arXiv Detail & Related papers (2023-09-11T20:28:18Z)
- SparseDet: Towards End-to-End 3D Object Detection [12.3069609175534]
We propose SparseDet for end-to-end 3D object detection from point cloud.
As a new detection paradigm, SparseDet maintains a fixed set of learnable proposals to represent latent candidates.
SparseDet achieves highly competitive detection accuracy while running at 34.5 FPS.
arXiv Detail & Related papers (2022-06-02T09:49:53Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- 3D Object Detection for Autonomous Driving: A Survey [14.772968858398043]
3D object detection serves as the core of such perception systems.
Despite existing efforts, 3D object detection on point clouds is still in its infancy.
Recent state-of-the-art detection methods with their pros and cons are presented.
arXiv Detail & Related papers (2021-06-21T03:17:20Z)
- FGR: Frustum-Aware Geometric Reasoning for Weakly Supervised 3D Vehicle Detection [81.79171905308827]
We propose frustum-aware geometric reasoning (FGR) to detect vehicles in point clouds without any 3D annotations.
Our method consists of two stages: coarse 3D segmentation and 3D bounding box estimation.
It is able to accurately detect objects in 3D space with only 2D bounding boxes and sparse point clouds.
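FGR's first stage works on the points inside the frustum of a 2D box; a short sketch of that standard lifting step is below. The function name and calling convention are assumptions, and the paper's coarse segmentation and bounding-box estimation stages are not reproduced here.

```python
import numpy as np

def frustum_points(points, P, box2d):
    """Keep LiDAR points whose image projection falls inside a 2D box.

    points : (N, 3) array already in the camera coordinate frame
    P      : (3, 4) camera projection matrix
    box2d  : (xmin, ymin, xmax, ymax) in pixels
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    uvw = pts_h @ P.T
    front = uvw[:, 2] > 1e-6                                # drop points behind the image plane
    pts, uvw = points[front], uvw[front]
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    xmin, ymin, xmax, ymax = box2d
    inside = (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax)
    return pts[inside]
```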
arXiv Detail & Related papers (2021-05-17T07:29:55Z)
- Object Removal Attacks on LiDAR-based 3D Object Detectors [6.263478017242508]
Object Removal Attacks (ORAs) aim to force 3D object detectors to fail.
We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest.
Our results show that the attack is effective in degrading the performance of commonly used 3D object detection models.
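A rough sketch of the underlying idea, assuming the attacker can relocate the single recorded return per beam inside a region of interest; the helper name and the fixed 5 m shift are hypothetical, not the paper's exact attack.

```python
import numpy as np

def shift_roi_returns(points, roi_mask, shift=5.0):
    """Push each return inside the region of interest further along its beam.

    points   : (N, 3) LiDAR sweep
    roi_mask : boolean (N,) selecting returns in the attacked region

    Because a LiDAR keeps a single return per beam direction, relocating the
    recorded range inside the ROI effectively erases the target object's
    points (the attack could equally drop the returns outright).
    """
    out = points.copy()
    rng = np.linalg.norm(out[roi_mask], axis=1, keepdims=True)
    out[roi_mask] = out[roi_mask] * (rng + shift) / rng  # rescale along each ray
    return out
```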
arXiv Detail & Related papers (2021-02-07T05:34:14Z)
- Expandable YOLO: 3D Object Detection from RGB-D Images [64.14512458954344]
This paper aims at constructing a lightweight object detector that takes a depth image and a color image from a stereo camera as input.
By extending the middle layers of the YOLOv3 architecture to 3D, the network can also produce outputs in the depth direction.
Intersection over Union (IoU) in 3D space is introduced to confirm the accuracy of region extraction results.
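For reference, 3D IoU is simply a volume-overlap ratio. Below is a minimal sketch for axis-aligned boxes; rotated boxes would additionally require a ground-plane polygon intersection, which is omitted here.

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo = np.maximum(a[:3], b[:3])              # lower corner of the overlap
    hi = np.minimum(a[3:], b[3:])              # upper corner of the overlap
    inter = np.prod(np.clip(hi - lo, 0, None)) # zero if the boxes do not overlap
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

print(iou_3d((0, 0, 0, 2, 2, 2), (1, 1, 1, 3, 3, 3)))  # 1 / 15
```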
arXiv Detail & Related papers (2020-06-26T07:32:30Z)