SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors
- URL: http://arxiv.org/abs/2309.05810v1
- Date: Mon, 11 Sep 2023 20:28:18 GMT
- Title: SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors
- Authors: Hongge Chen, Zhao Chen, Gregory P. Meyer, Dennis Park, Carl Vondrick,
Ashish Shrivastava, Yuning Chai
- Abstract summary: We present SHIFT3D, a differentiable pipeline for generating 3D shapes that are structurally plausible yet challenging to 3D object detectors.
In safety-critical applications like autonomous driving, discovering such novel challenging objects can offer insight into unknown vulnerabilities of 3D detectors.
- Score: 37.80095745939221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SHIFT3D, a differentiable pipeline for generating 3D shapes that
are structurally plausible yet challenging to 3D object detectors. In
safety-critical applications like autonomous driving, discovering such novel
challenging objects can offer insight into unknown vulnerabilities of 3D
detectors. By representing objects with a signed distance function (SDF), we
show that gradient error signals allow us to smoothly deform the shape or pose
of a 3D object in order to confuse a downstream 3D detector. Importantly, the
objects generated by SHIFT3D physically differ from the baseline object yet
retain a semantically recognizable shape. Our approach provides interpretable
failure modes for modern 3D object detectors, and can aid in preemptive
discovery of potential safety risks within 3D perception systems before these
risks become critical failures.
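To make the mechanism concrete, here is a minimal, self-contained sketch of the core idea: error gradients from a detector flow back through an SDF-based shape representation, so a latent shape code (and, analogously, pose parameters) can be nudged until the detector's confidence drops. The ToySDFDecoder, ToyDetector, soft surface weighting, and loss weights below are illustrative placeholders, not the SHIFT3D pipeline itself.

```python
import torch
import torch.nn as nn

class ToySDFDecoder(nn.Module):
    """Maps a shape latent code plus 3D query points to signed distances (placeholder)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + 3, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z, points):
        z_tiled = z.expand(points.shape[0], -1)           # share one latent across all queries
        return self.net(torch.cat([z_tiled, points], dim=-1)).squeeze(-1)

class ToyDetector(nn.Module):
    """Returns a single objectness score for a (weighted) point set (placeholder)."""
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points):
        return self.score(points).mean()

decoder, detector = ToySDFDecoder(), ToyDetector()
z = torch.randn(1, 64, requires_grad=True)                # shape latent to be deformed
z_init = z.detach().clone()
queries = torch.rand(2048, 3) * 2 - 1                     # query points in [-1, 1]^3
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    sdf = decoder(z, queries)                             # signed distances at the queries
    surface_w = torch.exp(-sdf.abs() / 0.05).unsqueeze(-1)  # crude soft surface extraction
    objectness = detector(queries * surface_w)            # differentiable detector confidence
    loss = objectness + 0.1 * (z - z_init).norm()         # suppress detection, stay near the original
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The regularization term keeps the perturbed latent close to the original, mirroring the paper's goal of shapes that differ physically from the baseline yet remain semantically recognizable.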
Related papers
- PointAD: Comprehending 3D Anomalies from Points and Pixels for Zero-shot 3D Anomaly Detection [13.60524473223155]
This paper introduces PointAD, a novel approach that transfers the strong generalization capabilities of CLIP for recognizing 3D anomalies on unseen objects.
PointAD renders 3D anomalies into multiple 2D renderings and projects them back into 3D space.
Our model can directly integrate RGB information, further enhancing the understanding of 3D anomalies in a plug-and-play manner.
arXiv Detail & Related papers (2024-10-01T01:40:22Z)
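As a rough illustration of the render-then-lift idea summarized above (project 3D points into several 2D views, score them there, and pull the scores back onto the points), here is a hedged sketch; the pinhole cameras, score maps, and averaging rule are assumptions for illustration, not PointAD's actual CLIP-based pipeline.

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates and depths."""
    cam = R @ points.T + t[:, None]              # 3xN points in the camera frame
    uv = (K @ cam)[:2] / cam[2:]                 # perspective divide
    return uv.T, cam[2]                          # (N, 2) pixels, (N,) depths

def lift_scores(points, views, score_maps):
    """Average per-pixel 2D anomaly scores over every view that sees each 3D point."""
    acc, cnt = np.zeros(len(points)), np.zeros(len(points))
    for (K, R, t), smap in zip(views, score_maps):
        uv, depth = project(points, K, R, t)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (depth > 0) & (u >= 0) & (v >= 0) & (u < smap.shape[1]) & (v < smap.shape[0])
        acc[ok] += smap[v[ok], u[ok]]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)              # per-point 3D anomaly score
```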
- VSRD: Instance-Aware Volumetric Silhouette Rendering for Weakly Supervised 3D Object Detection [11.061100776969383]
Monocular 3D object detection poses a significant challenge in 3D scene understanding.
Existing methods heavily rely on supervised learning using abundant 3D labels.
We propose a novel weakly supervised 3D object detection framework named VSRD.
arXiv Detail & Related papers (2024-03-29T20:43:55Z)
- FocalFormer3D : Focusing on Hard Instance for 3D Object Detection [97.56185033488168]
False negatives (FN) in 3D object detection can lead to potentially dangerous situations in autonomous driving.
In this work, we propose Hard Instance Probing (HIP), a general pipeline that identifies false negatives in a multi-stage manner.
We instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects.
arXiv Detail & Related papers (2023-08-08T20:06:12Z)
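A hedged sketch of the "hard instance" notion described above: find the ground-truth objects that the current predictions miss, i.e. the false negatives a later, more focused stage would target. The axis-aligned BEV IoU and the 0.5 threshold are illustrative choices, not FocalFormer3D's actual matching.

```python
def bev_iou(a, b):
    """Axis-aligned bird's-eye-view IoU between boxes given as [x1, y1, x2, y2]."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def missed_ground_truths(gt_boxes, pred_boxes, iou_thr=0.5):
    """Ground-truth boxes not covered by any prediction: the false negatives that a
    later, more focused stage would be pointed at."""
    return [g for g in gt_boxes
            if all(bev_iou(g, p) < iou_thr for p in pred_boxes)]

# Example: the second ground-truth box is missed by the predictions.
print(missed_ground_truths([[0, 0, 2, 4], [10, 10, 12, 14]],
                           [[0.2, 0.1, 2.1, 4.2]]))
```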
- MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices [78.20154723650333]
High-quality 3D ground-truth shapes are critical for 3D object reconstruction evaluation.
We introduce a novel multi-view RGBD dataset captured using a mobile device.
We obtain precise 3D ground-truth shape without relying on high-end 3D scanners.
arXiv Detail & Related papers (2023-03-03T14:02:50Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness to out-of-distribution and long-tail samples is essential to avoid dangerous failures.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
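In the spirit of the training-time deformation described above, here is a hedged sketch of adversarially deforming a point cloud so it becomes a harder training sample; the per-point offset parameterization, step sizes, and bound are illustrative assumptions, and `detector_loss_fn` is a stand-in for a differentiable detector loss.

```python
import torch

def adversarial_deform(points, detector_loss_fn, eps=0.05, steps=3, lr=0.02):
    """Learn a bounded per-point offset that raises the detector's loss on this object.

    points: (N, 3) tensor for one object; detector_loss_fn: differentiable loss given points.
    """
    offsets = torch.zeros_like(points, requires_grad=True)
    for _ in range(steps):
        loss = detector_loss_fn(points + offsets)      # higher loss = harder training sample
        grad, = torch.autograd.grad(loss, offsets)
        with torch.no_grad():
            offsets += lr * grad.sign()                # gradient ascent on the loss
            offsets.clamp_(-eps, eps)                  # small offsets keep the shape plausible
    return (points + offsets).detach()

# Example with a dummy loss (a real setup would use the 3D detector's training loss):
pts = torch.rand(1024, 3)
hard_pts = adversarial_deform(pts, lambda p: (p.norm(dim=1) - 1.0).pow(2).mean())
```

The deformed copy would then be fed back into training alongside the original samples to improve robustness on out-of-domain data.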
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image and align it with the 3D scene space via a 3D feature lifting operator.
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z)
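To illustrate the "3D detection as keypoint detection in 3D space" idea mentioned above, here is a hedged sketch of extracting object centers as local maxima of a voxel heatmap, CenterNet-style; the heatmap in the example is a random placeholder rather than a real detection head's output.

```python
import torch
import torch.nn.functional as F

def topk_3d_peaks(heatmap, k=10):
    """Keep local maxima of a (D, H, W) center heatmap and return the top-k voxel indices."""
    d_, h_, w_ = heatmap.shape
    x = heatmap[None, None]                                       # add batch and channel dims
    keep = (F.max_pool3d(x, kernel_size=3, stride=1, padding=1) == x).float()
    scores, idx = (keep * x).flatten().topk(k)                    # suppress non-maxima, take top-k
    d = torch.div(idx, h_ * w_, rounding_mode="floor")
    rem = idx % (h_ * w_)
    y = torch.div(rem, w_, rounding_mode="floor")
    z = rem % w_
    return scores, torch.stack([d, y, z], dim=-1)                 # (k,) scores, (k, 3) voxel coords

# Example on a random placeholder heatmap (stand-in for a CenterNet-3D head's output):
scores, centers = topk_3d_peaks(torch.rand(32, 96, 96), k=5)
```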
- LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector [80.7563981951707]
We propose LIGA-Stereo to learn stereo-based 3D detectors under the guidance of high-level geometry-aware representations of LiDAR-based detection models.
Compared with the state-of-the-art stereo detector, our method improves 3D detection performance on cars, pedestrians, and cyclists by 10.44%, 5.69%, and 5.97% mAP, respectively.
arXiv Detail & Related papers (2021-08-18T17:24:40Z)
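A hedged sketch of learning under the guidance of LiDAR-based representations, as described above: a simple feature-imitation loss that pulls a stereo detector's intermediate features toward a frozen LiDAR teacher's features inside foreground regions. Tensor shapes, the masking scheme, and the L2 form are assumptions for illustration.

```python
import torch

def imitation_loss(stereo_feat, lidar_feat, fg_mask):
    """Masked L2 distance pulling student (stereo) features toward a frozen LiDAR teacher.

    stereo_feat, lidar_feat: (B, C, H, W) intermediate feature maps (placeholders here).
    fg_mask: (B, H, W) boolean foreground mask, e.g. cells inside ground-truth boxes.
    """
    diff = (stereo_feat - lidar_feat.detach()) ** 2      # detach: no gradient into the teacher
    mask = fg_mask.unsqueeze(1).float()                  # broadcast over the channel dimension
    return (diff * mask).sum() / mask.sum().clamp(min=1.0)

# Example with random placeholder tensors:
loss = imitation_loss(torch.randn(2, 64, 100, 100),
                      torch.randn(2, 64, 100, 100),
                      torch.rand(2, 100, 100) > 0.9)
```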
- Monocular Differentiable Rendering for Self-Supervised 3D Object Detection [21.825158925459732]
3D object detection from monocular images is an ill-posed problem due to the projective entanglement of depth and scale.
We present a novel self-supervised method for textured 3D shape reconstruction and pose estimation of rigid objects.
Our method predicts the 3D location and meshes of each object in an image using differentiable rendering and a self-supervised objective.
arXiv Detail & Related papers (2020-09-30T09:21:43Z)
- DSGN: Deep Stereo Geometry Network for 3D Object Detection [79.16397166985706]
There is a large performance gap between image-based and LiDAR-based 3D object detectors.
Our method, called Deep Stereo Geometry Network (DSGN), significantly reduces this gap.
For the first time, we provide a simple and effective one-stage stereo-based 3D detection pipeline.
arXiv Detail & Related papers (2020-01-10T11:44:37Z)
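As a hedged illustration of the geometry-aware stereo representation behind the entry above, here is a minimal plane-sweep-style cost volume built by pairing left features with disparity-shifted right features; the concatenation layout and disparity range are common stereo-matching choices, not necessarily DSGN's exact construction.

```python
import torch

def build_cost_volume(left_feat, right_feat, max_disp=48):
    """Pair left features with right features shifted by each candidate disparity."""
    b, c, h, w = left_feat.shape
    volume = left_feat.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        volume[:, :c, d] = left_feat
        if d > 0:
            volume[:, c:, d, :, d:] = right_feat[:, :, :, :-d]
        else:
            volume[:, c:, d] = right_feat
    return volume                                # (B, 2C, D, H, W), ready for 3D convolutions

# Example with random placeholder feature maps:
vol = build_cost_volume(torch.randn(1, 32, 48, 96), torch.randn(1, 32, 48, 96))
```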
This list is automatically generated from the titles and abstracts of the papers on this site.