LiDAR-MIMO: Efficient Uncertainty Estimation for LiDAR-based 3D Object Detection
- URL: http://arxiv.org/abs/2206.00214v1
- Date: Wed, 1 Jun 2022 03:47:32 GMT
- Title: LiDAR-MIMO: Efficient Uncertainty Estimation for LiDAR-based 3D Object Detection
- Authors: Matthew Pitropov, Chengjie Huang, Vahdat Abdelzad, Krzysztof Czarnecki, Steven Waslander
- Abstract summary: LiDAR-MIMO is an adaptation of the multi-input multi-output (MIMO) uncertainty estimation method to the LiDAR-based 3D object detection task.
We show comparable uncertainty estimation results with only a small number of output heads.
- Score: 7.9533170503170565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The estimation of uncertainty in robotic vision, such as 3D object detection,
is an essential component in developing safe autonomous systems aware of their
own performance. However, the deployment of current uncertainty estimation
methods in 3D object detection remains challenging due to timing and
computational constraints. To tackle this issue, we propose LiDAR-MIMO, an
adaptation of the multi-input multi-output (MIMO) uncertainty estimation method
to the LiDAR-based 3D object detection task. Our method modifies the original
MIMO by performing multi-input at the feature level to ensure the detection,
uncertainty estimation, and runtime performance benefits are retained despite
the limited capacity of the underlying detector and the large computational
costs of point cloud processing. We compare LiDAR-MIMO with MC dropout and
ensembles as baselines and show comparable uncertainty estimation results with
only a small number of output heads. Further, LiDAR-MIMO can be configured to
be twice as fast as MC dropout and ensembles, while achieving higher mAP than
MC dropout and approaching that of ensembles.
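To make the output-head side of this concrete, here is a minimal PyTorch sketch of a MIMO-style multi-head detector: one shared backbone pass feeds several lightweight heads, and disagreement between the heads provides the uncertainty signal. All names and shapes are illustrative, the feature-level multi-input mixing the paper describes is omitted, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class MIMOHeads(nn.Module):
    """Sketch of K lightweight output heads over one shared BEV feature map."""

    def __init__(self, in_channels: int = 256, box_dim: int = 7, num_heads: int = 3):
        super().__init__()
        # K cheap 1x1-conv heads; the expensive backbone runs only once.
        self.heads = nn.ModuleList(
            nn.Conv2d(in_channels, box_dim, kernel_size=1) for _ in range(num_heads)
        )

    def forward(self, bev_features: torch.Tensor):
        preds = torch.stack([head(bev_features) for head in self.heads])
        mean = preds.mean(dim=0)  # ensemble-style point estimate
        var = preds.var(dim=0)    # head disagreement ~ epistemic uncertainty
        return mean, var

# Dummy BEV features standing in for any LiDAR backbone output.
features = torch.randn(2, 256, 128, 128)
boxes, uncertainty = MIMOHeads()(features)
```

Because the heads share everything upstream, a single forward pass produces all K outputs, which is where the speedup over MC dropout and ensembles comes from.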
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates for identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
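For reference, the standard loss behind the evidential-regression idea above is short enough to state directly. Below is the Normal-Inverse-Gamma negative log-likelihood from deep evidential regression (Amini et al., 2020), a common choice for attaching evidential uncertainty to box regression targets; the cited paper's exact formulation may differ.

```python
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    """Normal-Inverse-Gamma NLL: gamma is the predicted mean; nu, alpha,
    beta control the evidence. All inputs are tensors of the same shape."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(torch.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

# Epistemic variance falls out of the same parameters: beta / (nu * (alpha - 1)).
```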
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
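As a toy illustration of the scan-mixing idea above, the sketch below partitions two point clouds into inclination-angle bands and interleaves them. Real LaserMix++ operates on laser beams with matched label mixing and adds LiDAR-camera correspondences, none of which is modeled here.

```python
import numpy as np

def lasermix_toy(pts_a: np.ndarray, pts_b: np.ndarray, num_bands: int = 6):
    """Interleave inclination bands of two (N, 4) LiDAR scans (x, y, z, intensity)."""
    def band_ids(pts):
        inclination = np.arctan2(pts[:, 2], np.linalg.norm(pts[:, :2], axis=1))
        edges = np.linspace(inclination.min(), inclination.max() + 1e-6, num_bands + 1)
        return np.digitize(inclination, edges) - 1

    ids_a, ids_b = band_ids(pts_a), band_ids(pts_b)
    # Even-indexed bands from scan A, odd-indexed bands from scan B.
    return np.concatenate([pts_a[ids_a % 2 == 0], pts_b[ids_b % 2 == 1]])

mixed = lasermix_toy(np.random.randn(1000, 4), np.random.randn(1000, 4))
```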
- ODM3D: Alleviating Foreground Sparsity for Semi-Supervised Monocular 3D Object Detection [15.204935788297226]
The ODM3D framework entails cross-modal knowledge distillation at various levels to inject LiDAR-domain knowledge into a monocular detector during training.
By identifying foreground sparsity as the main culprit behind existing methods' suboptimal training, we exploit the precise localisation information embedded in LiDAR points.
Our method ranks 1st in both KITTI validation and test benchmarks, significantly surpassing all existing monocular methods, supervised or semi-supervised.
arXiv Detail & Related papers (2023-10-28T07:12:09Z)
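The distillation step above can be pictured as a masked feature-matching penalty. Here is a hypothetical single-level version, assuming aligned BEV feature maps from the monocular student and a frozen LiDAR teacher; ODM3D itself distills at several levels and uses LiDAR points to localize foreground.

```python
import torch
import torch.nn.functional as F

def fg_distill_loss(student_feat, teacher_feat, fg_mask):
    """MSE between student and (detached) teacher features, foreground cells only."""
    diff = F.mse_loss(student_feat, teacher_feat.detach(), reduction="none")
    return (diff * fg_mask).sum() / fg_mask.sum().clamp(min=1.0)

loss = fg_distill_loss(torch.randn(1, 64, 32, 32),
                       torch.randn(1, 64, 32, 32),
                       (torch.rand(1, 1, 32, 32) > 0.8).float())
```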
- MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning [13.59039985176011]
We introduce an Evidential Deep Learning (EDL) based uncertainty estimation framework for 3D object detection.
MEDL-U generates pseudo labels and quantifies the associated uncertainties.
Probabilistic detectors trained using MEDL-U surpass deterministic detectors trained using outputs from previous 3D annotators on the KITTI val set for all difficulty levels.
arXiv Detail & Related papers (2023-09-18T09:14:03Z)
- Mutual Information-calibrated Conformal Feature Fusion for Uncertainty-Aware Multimodal 3D Object Detection at the Edge [1.7898305876314982]
Three-dimensional (3D) object detection, a critical robotics operation, has seen significant advancements.
Our study integrates the principles of conformal inference with information theoretic measures to perform lightweight, Monte Carlo-free uncertainty estimation.
The framework demonstrates comparable or better performance on the KITTI 3D object detection benchmark than similar methods that are not uncertainty-aware.
arXiv Detail & Related papers (2023-09-18T09:02:44Z)
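The conformal-inference ingredient above is compact enough to sketch. Below is plain split-conformal calibration over scalar nonconformity scores from a held-out set; the cited method additionally calibrates with mutual-information-based measures, which this omits.

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, q, method="higher"))

# Detections whose nonconformity score exceeds tau are flagged as uncertain.
tau = conformal_threshold(np.abs(np.random.randn(500)), alpha=0.1)
```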
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
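To make "variational" concrete: in a mean-field Gaussian layer the weights are distributions sampled anew on every forward pass, so repeated passes yield a predictive spread. A minimal single-layer sketch follows; TANet is a full 3D detection architecture and this is not its code.

```python
import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    """Linear layer with Gaussian weights, resampled on every forward pass."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()

layer = VariationalLinear(8, 7)
x = torch.randn(4, 8)  # fixed dummy input
samples = torch.stack([layer(x) for _ in range(10)])  # 10 stochastic passes
box_mean, box_var = samples.mean(0), samples.var(0)   # spread ~ uncertainty
```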
- GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving [10.855519369371853]
Extrinsic perturbation is always present across multiple sensors.
We propose a multi-LiDAR 3D object detector called MLOD.
We conduct extensive experiments on a real-world dataset.
arXiv Detail & Related papers (2020-09-29T06:11:22Z)
- Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving [12.310862288230075]
This work presents a probabilistic deep neural network that combines LiDAR point clouds and RGB camera images for robust, accurate 3D object detection.
We explicitly model uncertainties in the classification and regression tasks, and leverage uncertainties to train the fusion network via a sampling mechanism.
arXiv Detail & Related papers (2020-02-01T14:24:51Z)
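A sketch of the sampling mechanism above, assuming the detector predicts a mean and log-variance per box parameter (a common way to expose regression uncertainty); the paper's fusion-training details are not reproduced here.

```python
import torch

def sample_box_candidates(mean: torch.Tensor, log_var: torch.Tensor, n: int = 5):
    """Draw n perturbed boxes per detection from N(mean, exp(log_var))."""
    std = (0.5 * log_var).exp()
    return mean + std * torch.randn((n, *mean.shape))

# 3 detections, 7 box parameters each (x, y, z, w, l, h, yaw).
candidates = sample_box_candidates(torch.zeros(3, 7), torch.full((3, 7), -2.0))
```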