MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential
Deep Learning
- URL: http://arxiv.org/abs/2309.09599v3
- Date: Thu, 15 Feb 2024 14:48:08 GMT
- Title: MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential
Deep Learning
- Authors: Helbert Paat, Qing Lian, Weilong Yao, Tong Zhang
- Abstract summary: We introduce an Evidential Deep Learning (EDL) based uncertainty estimation framework for 3D object detection.
MEDL-U generates pseudo labels and quantifies the associated uncertainties.
Probabilistic detectors trained using MEDL-U surpass deterministic detectors trained using outputs from previous 3D annotators on the KITTI val set for all difficulty levels.
- Score: 13.59039985176011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in deep learning-based 3D object detection necessitate the
availability of large-scale datasets. However, this requirement introduces the
challenge of manual annotation, which is often both burdensome and
time-consuming. To tackle this issue, the literature has seen the emergence of
several weakly supervised frameworks for 3D object detection which can
automatically generate pseudo labels for unlabeled data. Nevertheless, these
generated pseudo labels contain noise and are not as accurate as those labeled
by humans. In this paper, we present the first approach that addresses the
inherent ambiguities present in pseudo labels by introducing an Evidential Deep
Learning (EDL) based uncertainty estimation framework. Specifically, we propose
MEDL-U, an EDL framework based on MTrans, which not only generates pseudo
labels but also quantifies the associated uncertainties. However, applying EDL
to 3D object detection presents three primary challenges: (1) relatively lower
pseudolabel quality in comparison to other autolabelers; (2) excessively high
evidential uncertainty estimates; and (3) lack of clear interpretability and
effective utilization of uncertainties for downstream tasks. We tackle these
issues through the introduction of an uncertainty-aware IoU-based loss, an
evidence-aware multi-task loss function, and the implementation of a
post-processing stage for uncertainty refinement. Our experimental results
demonstrate that probabilistic detectors trained using the outputs of MEDL-U
surpass deterministic detectors trained using outputs from previous 3D
annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U
achieves state-of-the-art results on the KITTI official test set compared to
existing 3D automatic annotators.
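The paper's exact losses are not reproduced here, but the evidential regression idea MEDL-U builds on can be sketched briefly: the network predicts Normal-Inverse-Gamma (NIG) parameters per box attribute, from which both aleatoric and epistemic uncertainty fall out in closed form. The function name and toy values below are illustrative, not taken from the paper's code.

```python
import numpy as np

def nig_uncertainties(gamma, nu, alpha, beta):
    """Illustrative sketch: given NIG parameters (gamma, nu, alpha, beta)
    predicted for one box attribute, return the point prediction and the
    closed-form aleatoric / epistemic uncertainties used in evidential
    regression (requires alpha > 1)."""
    prediction = gamma                       # E[mu]
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic

# Toy example: one predicted box length with hypothetical NIG parameters.
pred, alea, epis = nig_uncertainties(gamma=3.9, nu=2.0, alpha=3.0, beta=0.5)
print(pred, alea, epis)
```

Note how low evidence (small nu, alpha) inflates the epistemic term, which is why raw EDL estimates can be excessively high, the second challenge the paper addresses with its post-processing refinement stage.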
Related papers
- Dual-Perspective Knowledge Enrichment for Semi-Supervised 3D Object
Detection [55.210991151015534]
We present a novel Dual-Perspective Knowledge Enrichment approach named DPKE for semi-supervised 3D object detection.
Our DPKE enriches the knowledge of limited training data, particularly unlabeled data, from two perspectives: data-perspective and feature-perspective.
arXiv Detail & Related papers (2024-01-10T08:56:07Z)
- DDS3D: Dense Pseudo-Labels with Dynamic Threshold for Semi-Supervised 3D
Object Detection [15.440609044002722]
We present a simple yet effective semi-supervised 3D object detector named DDS3D.
Benefiting from these two components, our DDS3D outperforms the state-of-the-art semi-supervised 3D object detection methods with mAP gains of 3.1% on the pedestrian class and 2.1% on the cyclist class.
arXiv Detail & Related papers (2023-03-09T07:30:53Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z)
- ST3D++: Denoised Self-training for Unsupervised Domain Adaptation on 3D
Object Detection [78.71826145162092]
We present a self-training method, named ST3D++, with a holistic pseudo label denoising pipeline for unsupervised domain adaptation on 3D object detection.
We equip the pseudo label generation process with a hybrid quality-aware triplet memory to improve the quality and stability of generated pseudo labels.
In the model training stage, we propose a source data assisted training strategy and a curriculum data augmentation policy.
arXiv Detail & Related papers (2021-08-15T07:49:06Z)
- 3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object
Detection [76.42897462051067]
3DIoUMatch is a novel semi-supervised method for 3D object detection applicable to both indoor and outdoor scenes.
We leverage a teacher-student mutual learning framework to propagate information from the labeled to the unlabeled train set in the form of pseudo-labels.
Our method consistently improves state-of-the-art methods on both ScanNet and SUN-RGBD benchmarks by significant margins under all label ratios.
arXiv Detail & Related papers (2020-12-08T11:06:26Z)
- Uncertainty-Aware Voxel based 3D Object Detection and Tracking with
von-Mises Loss [13.346392746224117]
Uncertainty helps us tackle the error in the perception system and improve robustness.
We propose a method for improving target tracking performance by adding uncertainty regression to the SECOND detector.
arXiv Detail & Related papers (2020-11-04T21:53:31Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
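A recurring idea across the papers above (MEDL-U, GLENet) is feeding per-label uncertainty back into detector training so that noisy pseudo labels are down-weighted. The sketch below is a minimal, hypothetical illustration of that pattern; the weighting scheme and names are not taken from any of the listed papers.

```python
import numpy as np

def uncertainty_weighted_l1(pred, target, label_var):
    """Hypothetical sketch: weight a per-attribute L1 regression loss by the
    inverse of each pseudo label's estimated variance, so that uncertain
    (likely noisy) labels contribute less to training."""
    residual = np.abs(np.asarray(pred) - np.asarray(target))
    # Higher label variance -> smaller weight; +1 keeps weights in (0, 1].
    weights = 1.0 / (1.0 + np.asarray(label_var))
    return float(np.mean(weights * residual))

# Two box attributes with equal error; the second label is more uncertain
# and is therefore down-weighted.
loss = uncertainty_weighted_l1(pred=[1.0, 1.0], target=[0.0, 0.0],
                               label_var=[0.0, 1.0])
print(loss)
```

This is the "effective utilization of uncertainties for downstream tasks" the MEDL-U abstract refers to: the annotator's uncertainty estimates become training signal for the downstream detector rather than being discarded.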
This list is automatically generated from the titles and abstracts of the papers in this site.