Benchmarking Robustness of 3D Object Detection to Common Corruptions in
Autonomous Driving
- URL: http://arxiv.org/abs/2303.11040v1
- Date: Mon, 20 Mar 2023 11:45:54 GMT
- Title: Benchmarking Robustness of 3D Object Detection to Common Corruptions in
Autonomous Driving
- Authors: Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao
Yang, Hang Su, Xingxing Wei, Jun Zhu
- Abstract summary: Existing 3D detectors lack robustness to real-world corruptions caused by adverse weather, sensor noise, etc.
We benchmark 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios.
We conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their robustness.
- Score: 44.753797839280516
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D object detection is an important task in autonomous driving to perceive
the surroundings. Despite the excellent performance, the existing 3D detectors
lack robustness to real-world corruptions caused by adverse weather, sensor
noise, etc., raising concerns about the safety and reliability of
autonomous driving systems. To comprehensively and rigorously benchmark the
corruption robustness of 3D detectors, in this paper we design 27 types of
common corruptions for both LiDAR and camera inputs considering real-world
driving scenarios. By synthesizing these corruptions on public datasets, we
establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and
Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object
detection models to evaluate their corruption robustness. Based on the
evaluation results, we draw several important findings, including: 1)
motion-level corruptions are the most threatening, leading to a significant
performance drop across all models; 2) LiDAR-camera fusion models demonstrate
better robustness; 3) camera-only models are extremely vulnerable to image
corruptions, showing the indispensability of LiDAR point clouds. We release the
benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope
that our benchmarks and findings can provide insights for future research on
developing robust 3D object detection models.
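The paper synthesizes corruptions on clean datasets at graded severity levels. As an illustration only (not the authors' released implementation; the function name, severity-to-scale mapping, and noise magnitudes below are assumptions), a noise-level LiDAR corruption such as coordinate jitter might be sketched like this:

```python
import numpy as np

def corrupt_gaussian_noise(points: np.ndarray, severity: int, seed: int = 0) -> np.ndarray:
    """Jitter LiDAR point coordinates with additive Gaussian noise.

    `points` is an (N, 4) array of (x, y, z, intensity); only the xyz
    coordinates are perturbed. Severity levels 1-5 map to increasing
    noise scales, loosely following the common-corruptions convention.
    The scale values here are illustrative, not from the paper.
    """
    scales = {1: 0.02, 2: 0.04, 3: 0.06, 4: 0.08, 5: 0.10}  # metres (assumed)
    rng = np.random.default_rng(seed)
    corrupted = points.copy()
    corrupted[:, :3] += rng.normal(0.0, scales[severity], size=(points.shape[0], 3))
    return corrupted

# Example: jitter a toy point cloud at severity 3.
cloud = np.zeros((100, 4))
noisy = corrupt_gaussian_noise(cloud, severity=3)
```

A benchmark built this way re-runs each detector on the corrupted copies and reports the drop relative to clean performance, averaged over severities.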
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - MultiCorrupt: A Multi-Modal Robustness Dataset and Benchmark of LiDAR-Camera Fusion for 3D Object Detection [5.462358595564476]
Multi-modal 3D object detection models for automated driving have demonstrated exceptional performance on computer vision benchmarks like nuScenes.
However, their reliance on densely sampled LiDAR point clouds and meticulously calibrated sensor arrays poses challenges for real-world applications.
We introduce MultiCorrupt, a benchmark designed to evaluate the robustness of multi-modal 3D object detectors against ten distinct types of corruptions.
arXiv Detail & Related papers (2024-02-18T18:56:13Z) - FocalFormer3D : Focusing on Hard Instance for 3D Object Detection [97.56185033488168]
False negatives (FN) in 3D object detection can lead to potentially dangerous situations in autonomous driving.
In this work, we propose Hard Instance Probing (HIP), a general pipeline that identifies false negatives in a multi-stage manner.
We instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects.
arXiv Detail & Related papers (2023-08-08T20:06:12Z) - Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection
Sensor Fusion Models [16.823829387723524]
We analyze the robustness of a high-performance, open source sensor fusion model architecture towards adversarial attacks.
We find that despite the use of a LiDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks.
arXiv Detail & Related papers (2021-09-13T23:38:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.