Inferring Spatial Uncertainty in Object Detection
- URL: http://arxiv.org/abs/2003.03644v2
- Date: Sat, 1 Aug 2020 07:11:18 GMT
- Title: Inferring Spatial Uncertainty in Object Detection
- Authors: Zining Wang, Di Feng, Yiyang Zhou, Lars Rosenbaum, Fabian Timm, Klaus
Dietmayer, Masayoshi Tomizuka and Wei Zhan
- Abstract summary: We propose a generative model to estimate bounding box label uncertainties from LiDAR point clouds.
Comprehensive experiments show that the proposed model represents uncertainties commonly seen in driving scenarios.
We propose an extension of IoU, called the Jaccard IoU (JIoU), as a new evaluation metric that incorporates label uncertainty.
- Score: 35.28872968233385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability of real-world datasets is the prerequisite for developing
object detection methods for autonomous driving. While ambiguity exists in
object labels due to the error-prone annotation process or sensor observation
noise, current object detection datasets only provide deterministic
annotations without considering their uncertainty. This precludes in-depth
evaluation of different object detection methods, especially for those that
explicitly model predictive probability. In this work, we propose a generative
model to estimate bounding box label uncertainties from LiDAR point clouds, and
define a new representation of the probabilistic bounding box through spatial
distribution. Comprehensive experiments show that the proposed model represents
uncertainties commonly seen in driving scenarios. Based on the spatial
distribution, we further propose an extension of IoU, called the Jaccard IoU
(JIoU), as a new evaluation metric that incorporates label uncertainty.
Experiments on the KITTI and the Waymo Open Datasets show that JIoU is superior
to IoU when evaluating probabilistic object detectors.
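For intuition, below is a minimal sketch of the JIoU idea, not the authors' reference implementation: the label and the prediction are each represented as a normalized spatial distribution over a discretized grid, and JIoU is computed as the generalized Jaccard index (ratio of the element-wise minimum to the element-wise maximum) of the two distributions. The Gaussian edge softening, the grid resolution, and the helper names (`box_to_distribution`, `jiou`, `sigma`) are illustrative assumptions standing in for the paper's estimated label uncertainty.
```python
import numpy as np

def box_to_distribution(grid_x, grid_y, box, sigma=0.2):
    """Hypothetical helper: turn an axis-aligned box (cx, cy, w, h) into a
    normalized spatial distribution by softening its edges with a Gaussian
    falloff of width `sigma` (a stand-in for the label uncertainty)."""
    cx, cy, w, h = box
    dx = np.maximum(np.abs(grid_x - cx) - w / 2.0, 0.0)
    dy = np.maximum(np.abs(grid_y - cy) - h / 2.0, 0.0)
    p = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
    return p / p.sum()

def jiou(p, q):
    """Generalized Jaccard index between two normalized spatial distributions."""
    return np.minimum(p, q).sum() / np.maximum(p, q).sum()

# Example: a ground-truth box with soft edges vs. a slightly shifted prediction.
xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
p_label = box_to_distribution(xs, ys, (0.0, 0.0, 2.0, 1.0))
p_pred  = box_to_distribution(xs, ys, (0.2, 0.1, 2.0, 1.0))
print(f"JIoU = {jiou(p_label, p_pred):.3f}")
```
When both distributions collapse to uniform masses over hard box boundaries, this quantity reduces to the ordinary IoU, which is why JIoU can be read as an uncertainty-aware extension of the standard metric.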
Related papers
- Credible Teacher for Semi-Supervised Object Detection in Open Scene [106.25850299007674]
In Open Scene Semi-Supervised Object Detection (O-SSOD), unlabeled data may contain unknown objects not observed in the labeled data.
This is detrimental to current methods that mainly rely on self-training, as the added uncertainty lowers the localization and classification precision of pseudo labels.
We propose Credible Teacher, an end-to-end framework to prevent uncertain pseudo labels from misleading the model.
arXiv Detail & Related papers (2024-01-01T08:19:21Z) - GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z) - Object Detection as Probabilistic Set Prediction [3.7599363231894176]
We present a proper scoring rule for evaluating and training probabilistic object detectors.
Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics.
arXiv Detail & Related papers (2022-03-15T15:13:52Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Trajectory Forecasting from Detection with Uncertainty-Aware Motion
Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z) - CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In
Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - Labels Are Not Perfect: Inferring Spatial Uncertainty in Object
Detection [26.008419879970365]
In this work, we infer the uncertainty in bounding box labels from LiDAR point clouds based on a generative model.
Comprehensive experiments show that the proposed model reflects complex environmental noises in LiDAR perception and the label quality.
We propose Jaccard IoU as a new evaluation metric that extends IoU by incorporating label uncertainty.
arXiv Detail & Related papers (2020-12-18T09:11:44Z) - A Review and Comparative Study on Probabilistic Object Detection in
Autonomous Driving [14.034548457000884]
Capturing uncertainty in object detection is indispensable for safe autonomous driving.
However, there is no systematic summary of uncertainty estimation in deep object detection.
This paper provides a review and comparative study on existing probabilistic object detection methods.
arXiv Detail & Related papers (2020-11-20T22:30:36Z) - Labels Are Not Perfect: Improving Probabilistic Object Detection via
Label Uncertainty [12.531126969367774]
We leverage our previously proposed method for estimating uncertainty inherent in ground truth bounding box parameters.
Experimental results on the KITTI dataset show that our method surpasses both the baseline model and models based on simple uncertainty heuristics by up to 3.6% in terms of Average Precision.
arXiv Detail & Related papers (2020-08-10T14:49:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.