On the Robustness of 3D Object Detectors
- URL: http://arxiv.org/abs/2207.10205v1
- Date: Wed, 20 Jul 2022 21:47:15 GMT
- Title: On the Robustness of 3D Object Detectors
- Authors: Fatima Albreiki, Sultan Abughazal, Jean Lahoud, Rao Anwer, Hisham
Cholakkal, and Fahad Khan
- Abstract summary: 3D scenes exhibit a lot of variations and are prone to sensor inaccuracies as well as information loss during pre-processing.
This work aims to analyze and benchmark popular point-based 3D object detectors against several data corruptions.
- Score: 9.467525852900007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, significant progress has been achieved for 3D object
detection on point clouds thanks to the advances in 3D data collection and deep
learning techniques. Nevertheless, 3D scenes exhibit a lot of variations and
are prone to sensor inaccuracies as well as information loss during
pre-processing. Thus, it is crucial to design techniques that are robust
against these variations. This requires a detailed analysis and understanding
of the effect of such variations. This work aims to analyze and benchmark
popular point-based 3D object detectors against several data corruptions. To
the best of our knowledge, we are the first to investigate the robustness of
point-based 3D object detectors. To this end, we design and evaluate
corruptions that involve data addition, reduction, and alteration. We further
study the robustness of different modules against local and global variations.
Our experimental results reveal several intriguing findings. For instance, we
show that methods that integrate Transformers at a patch or object level lead
to increased robustness, compared to using Transformers at the point level.
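The three corruption families the abstract names (data addition, reduction, and alteration) can be sketched for a point cloud as simple NumPy transforms. This is a minimal illustrative sketch, not the paper's actual benchmark code; the function names and parameter defaults are assumptions.

```python
import numpy as np

def add_noise_points(points, ratio=0.1, rng=None):
    """Data addition: inject spurious points sampled uniformly
    inside the cloud's bounding box (illustrative corruption)."""
    rng = rng or np.random.default_rng(0)
    n_new = int(len(points) * ratio)
    lo, hi = points.min(axis=0), points.max(axis=0)
    extra = rng.uniform(lo, hi, size=(n_new, points.shape[1]))
    return np.vstack([points, extra])

def drop_points(points, ratio=0.1, rng=None):
    """Data reduction: randomly remove roughly `ratio` of the points,
    mimicking occlusion or sensor dropout."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(points)) >= ratio
    return points[keep]

def jitter_points(points, sigma=0.01, rng=None):
    """Data alteration: perturb coordinates with Gaussian noise,
    mimicking sensor inaccuracy."""
    rng = rng or np.random.default_rng(0)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

A robustness benchmark in the spirit of the paper would apply such corruptions at increasing severities to a test set and compare each detector's mAP drop against its clean-data baseline.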
Related papers
- SSD-MonoDETR: Supervised Scale-aware Deformable Transformer for
Monocular 3D Object Detection [28.575174815764566]
This paper proposes a novel "Supervised Scale-aware Deformable Attention" (SSDA) for monocular 3D object detection.
Imposing the scale awareness, SSDA could well predict the accurate receptive field of an object query.
SSDA significantly improves the detection accuracy, especially on moderate and hard objects.
arXiv Detail & Related papers (2023-05-12T06:17:57Z) - Towards Model Generalization for Monocular 3D Object Detection [57.25828870799331]
We present an effective unified camera-generalized paradigm (CGP) for Mono3D object detection.
We also propose the 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via instance-level augmentation.
Our method called DGMono3D achieves remarkable performance on all evaluated datasets and surpasses the SoTA unsupervised domain adaptation scheme.
arXiv Detail & Related papers (2022-05-23T23:05:07Z) - Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed as Homography Loss, is proposed to achieve the goal, which exploits both 2D and 3D information.
Our method outperforms other state-of-the-art approaches by a large margin on the KITTI 3D dataset.
arXiv Detail & Related papers (2022-04-02T03:48:03Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and open-source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - 3D Object Detection for Autonomous Driving: A Survey [14.772968858398043]
3D object detection serves as the core basis of such a perception system.
Despite existing efforts, 3D object detection on point clouds is still in its infancy.
Recent state-of-the-art detection methods with their pros and cons are presented.
arXiv Detail & Related papers (2021-06-21T03:17:20Z) - Geometry-aware data augmentation for monocular 3D object detection [18.67567745336633]
This paper focuses on monocular 3D object detection, one of the essential modules in autonomous driving systems.
A key challenge is that the depth recovery problem is ill-posed in monocular data.
We conduct a thorough analysis to reveal how existing methods fail to robustly estimate depth when different geometry shifts occur.
We convert the aforementioned manipulations into four corresponding 3D-aware data augmentation techniques.
arXiv Detail & Related papers (2021-04-12T23:12:48Z) - Delving into Localization Errors for Monocular 3D Object Detection [85.77319416168362]
Estimating 3D bounding boxes from monocular images is an essential component in autonomous driving.
In this work, we quantify the impact introduced by each sub-task and find that localization error is the vital factor restricting monocular 3D detection.
arXiv Detail & Related papers (2021-03-30T10:38:01Z) - ST3D: Self-training for Unsupervised Domain Adaptation on 3D
Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z) - Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud
Object Detection [64.2159881697615]
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques.
We propose a domain-adaptation-like approach to enhance the robustness of the feature representation.
Our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-08T05:15:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.