Towards a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
- URL: http://arxiv.org/abs/2306.07344v1
- Date: Mon, 12 Jun 2023 18:06:29 GMT
- Title: Towards a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
- Authors: Maciej K. Wozniak, Viktor Karefjards, Marko Thiel, Patric Jensfelt
- Abstract summary: This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust.
We demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
- Score: 4.3012765978447565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal sensor fusion methods for 3D object detection have been
revolutionizing the autonomous driving research field. Nevertheless, most of
these methods heavily rely on dense LiDAR data and accurately calibrated
sensors, which are often not available in real-world scenarios. Data from
LiDAR and cameras often come misaligned due to miscalibration, decalibration,
or the different operating frequencies of the sensors. Additionally, parts of
the LiDAR data may be occluded, and other parts may be missing due to hardware
malfunction or weather conditions. This work presents a novel fusion step that
addresses such data corruptions and makes sensor fusion for 3D object
detection more robust. Through extensive experiments, we demonstrate that our
method performs on par with state-of-the-art approaches on normal data and
outperforms them on misaligned data.
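To make the misalignment corruption concrete: even a small extrinsic calibration error shifts where LiDAR points land in the image, so camera and LiDAR features no longer correspond. The sketch below is illustrative only (the intrinsics, extrinsics, and error magnitudes are invented, and this is not the paper's code); it projects the same points with a clean and a decalibrated extrinsic and reports the resulting pixel offset.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis by `deg` degrees."""
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def project(points, K, R, t):
    """Project Nx3 points (LiDAR frame) to pixels via extrinsics (R, t)."""
    cam = points @ R.T + t          # LiDAR frame -> camera frame
    uv = cam @ K.T                  # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide -> pixel coords

# Hypothetical intrinsics and a clean LiDAR-to-camera extrinsic.
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0,   0.0,   1.0]])
R_clean, t_clean = np.eye(3), np.zeros(3)

# Decalibration: a 1 degree rotation error plus a 5 cm translation error.
R_bad = rot_z(1.0) @ R_clean
t_bad = t_clean + np.array([0.05, 0.0, 0.0])

pts = np.random.uniform([-10.0, -2.0, 5.0], [10.0, 2.0, 40.0], (1000, 3))
offset = np.linalg.norm(project(pts, K, R_clean, t_clean)
                        - project(pts, K, R_bad, t_bad), axis=1)
print(f"mean pixel offset: {offset.mean():.1f} px")
```

Offsets of this size are enough to pair a LiDAR point with image features from a neighbouring object, which is the failure mode a robust fusion step has to absorb.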
Related papers
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often suffer inferior performance compared to LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gains across multiple state-of-the-art models and datasets with a negligible additional latency of 9.66 ms and a small storage cost.
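The mechanism behind "LiDAR from the past" is essentially a rigid transform: each historical sweep is re-expressed in the current ego frame using recorded poses, densifying the geometric signal without any labels. A minimal sketch, assuming 4x4 world-from-ego pose matrices (the variable names and toy data are placeholders, not the paper's pipeline):

```python
import numpy as np

def to_current_frame(points, pose_past, pose_now):
    """Re-express a past sweep (Nx3, in its own ego frame) in the current
    ego frame. Poses are 4x4 world-from-ego matrices, as most autonomous
    driving datasets provide."""
    T = np.linalg.inv(pose_now) @ pose_past          # current-from-past
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Toy example: one past sweep captured 2 m behind the current pose.
pose_now = np.eye(4)
pose_past = np.eye(4)
pose_past[0, 3] = -2.0
past_sweep = np.random.rand(100, 3)
aligned = to_current_frame(past_sweep, pose_past, pose_now)
# Accumulating many such sweeps densifies the geometry available to the
# monocular detector without requiring any labels.
```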
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- MultiCorrupt: A Multi-Modal Robustness Dataset and Benchmark of LiDAR-Camera Fusion for 3D Object Detection [5.462358595564476]
Multi-modal 3D object detection models for automated driving have demonstrated exceptional performance on computer vision benchmarks like nuScenes.
However, their reliance on densely sampled LiDAR point clouds and meticulously calibrated sensor arrays poses challenges for real-world applications.
We introduce MultiCorrupt, a benchmark designed to evaluate the robustness of multi-modal 3D object detectors against ten distinct types of corruptions.
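A representative corruption in such a benchmark is reduced LiDAR resolution. The sketch below simulates it by binning points into elevation beams and dropping every other beam; the beam count and drop pattern are illustrative assumptions, not MultiCorrupt's exact parameters.

```python
import numpy as np

def drop_beams(points, n_beams=32, keep_every=2):
    """Simulate a lower-resolution LiDAR by keeping every `keep_every`-th
    elevation beam; beam index is estimated from the elevation angle."""
    elev = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    edges = np.linspace(elev.min(), elev.max(), n_beams + 1)
    beam = np.clip(np.digitize(elev, edges) - 1, 0, n_beams - 1)
    return points[beam % keep_every == 0]

cloud = np.random.randn(120_000, 3) * np.array([20.0, 20.0, 2.0])
sparse = drop_beams(cloud)
print(len(cloud), "->", len(sparse))   # roughly half the points survive
```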
arXiv Detail & Related papers (2024-02-18T18:56:13Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, the fusion for detection can be effectively performed by combining their ROI features.
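FBMNet learns the 3D-to-2D assignment; as a rough, non-learned stand-in, the sketch below matches each projected 3D proposal to the 2D proposal with the highest IoU and concatenates their ROI features. All shapes, the threshold, and the greedy matching rule are simplifications for illustration.

```python
import numpy as np

def iou_2d(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_and_fuse(boxes3d_proj, feats3d, boxes2d, feats2d, thr=0.5):
    """Greedily pair each projected 3D proposal with its best 2D proposal
    and concatenate the ROI features of matched pairs."""
    fused = []
    for b3, f3 in zip(boxes3d_proj, feats3d):
        ious = [iou_2d(b3, b2) for b2 in boxes2d]
        j = int(np.argmax(ious))
        if ious[j] >= thr:
            fused.append(np.concatenate([f3, feats2d[j]]))
    return np.stack(fused) if fused else np.empty((0, 0))
```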
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
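One way to study the effect of sensor errors is to inject a noise model into clean depth maps and compare the resulting models. A minimal sketch with an invented noise model (depth-dependent Gaussian noise plus randomly invalidated pixels), not the measured error statistics from the paper:

```python
import numpy as np

def corrupt_depth(depth, sigma_rel=0.01, dropout=0.02, rng=None):
    """Depth-dependent Gaussian noise plus randomly invalidated pixels."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = depth + rng.normal(0.0, sigma_rel * depth)   # error grows with range
    noisy[rng.random(depth.shape) < dropout] = 0.0       # 0 marks invalid pixels
    return noisy

clean = np.full((480, 640), 3.0)     # a 3 m planar scene
noisy = corrupt_depth(clean)
```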
arXiv Detail & Related papers (2023-03-26T22:32:44Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
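The principle can be illustrated with a fixed-weight stand-in: fuse per-pixel depth as a confidence-weighted average. In SenFuNet the weights come from a network that has learned each sensor's noise and outlier behaviour; in the sketch below they are hand-set, and the sensor names are hypothetical.

```python
import numpy as np

def fuse_depths(depths, confidences, eps=1e-6):
    """Per-pixel confidence-weighted average of several depth maps.

    depths, confidences: lists of HxW arrays; confidence 0 marks invalid pixels.
    """
    d = np.stack(depths)
    w = np.stack(confidences)
    return (w * d).sum(0) / (w.sum(0) + eps)

tof = np.full((240, 320), 2.00)      # hypothetical time-of-flight depth
stereo = np.full((240, 320), 2.10)   # hypothetical stereo depth
fused = fuse_depths([tof, stereo],
                    [np.full_like(tof, 0.8), np.full_like(stereo, 0.2)])
print(fused[0, 0])   # 2.02: pulled toward the more trusted sensor
```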
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We also propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
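A deformation step in this spirit can be sketched as a single FGSM-style update on the point coordinates: shift each point a bounded distance in the direction that increases the detection loss, then train on the deformed cloud. This is a simplified stand-in for the adversarial deformation the paper learns; `detector`, `loss_fn`, and `targets` are placeholders.

```python
import torch

def adversarial_deform(points, detector, loss_fn, targets, eps=0.05):
    """One FGSM-style step on point coordinates: shift each coordinate by
    up to `eps` metres in the direction that increases the detection loss."""
    pts = points.clone().detach().requires_grad_(True)
    loss = loss_fn(detector(pts), targets)
    loss.backward()
    with torch.no_grad():
        deformed = pts + eps * pts.grad.sign()
    return deformed.detach()

# Training then mixes clean and deformed clouds, which is what improves
# generalization to out-of-domain and long-tail samples.
```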
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- Frustum Fusion: Pseudo-LiDAR and LiDAR Fusion for 3D Detection [0.0]
We propose a novel data fusion algorithm to combine accurate point clouds with dense but less accurate point clouds obtained from stereo pairs.
We train multiple 3D object detection methods and show that our fusion strategy consistently improves the performance of detectors.
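A naive version of this fusion strategy: keep every accurate LiDAR point and backfill with stereo pseudo-LiDAR points only where no real return is nearby, so noisy stereo depth never overrides accurate geometry. The radius and the nearest-neighbour test below are invented for illustration and are not the paper's frustum-based method.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_clouds(lidar, pseudo, radius=0.3):
    """Keep all LiDAR points; add a pseudo-LiDAR point only if no real
    return lies within `radius` metres, so noisy stereo depth never
    overrides accurate LiDAR geometry."""
    dist, _ = cKDTree(lidar).query(pseudo, k=1)
    return np.vstack([lidar, pseudo[dist > radius]])

lidar = np.random.rand(5_000, 3) * 50.0      # sparse but accurate
pseudo = np.random.rand(100_000, 3) * 50.0   # dense but noisy (from stereo)
fused = fuse_clouds(lidar, pseudo)
```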
arXiv Detail & Related papers (2021-11-08T19:29:59Z)
- Radar Voxel Fusion for 3D Object Detection [0.0]
This paper develops a low-level sensor fusion network for 3D object detection.
The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes.
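"Low-level" fusion means combining raw measurements before any detection head, for example by scattering both sensors' points into a shared voxel grid with one channel per sensor. A minimal sketch; the grid bounds, resolution, and channel layout are assumptions, not the paper's configuration.

```python
import numpy as np

def voxelize(points, grid_min=(-50.0, -50.0, -3.0), voxel=0.5,
             shape=(200, 200, 12)):
    """Binary occupancy grid from Nx3 points; out-of-range points are dropped."""
    idx = np.floor((points - np.asarray(grid_min)) / voxel).astype(int)
    ok = np.all((idx >= 0) & (idx < np.asarray(shape)), axis=1)
    grid = np.zeros(shape, dtype=np.float32)
    grid[tuple(idx[ok].T)] = 1.0
    return grid

lidar_grid = voxelize(np.random.uniform(-50, 50, (50_000, 3)))
radar_grid = voxelize(np.random.uniform(-50, 50, (400, 3)))  # radar is sparse
# Stack per-sensor channels; a 3D CNN then consumes the early-fused tensor.
fused = np.stack([lidar_grid, radar_grid], axis=0)
```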
arXiv Detail & Related papers (2021-06-26T20:34:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.