Uncertainty-Aware Voxel based 3D Object Detection and Tracking with von-Mises Loss
- URL: http://arxiv.org/abs/2011.02553v1
- Date: Wed, 4 Nov 2020 21:53:31 GMT
- Title: Uncertainty-Aware Voxel based 3D Object Detection and Tracking with von-Mises Loss
- Authors: Yuanxin Zhong, Minghan Zhu and Huei Peng
- Abstract summary: Uncertainty helps us tackle errors in the perception system and improve robustness.
We propose a method for improving target tracking performance by adding uncertainty regression to the SECOND detector.
- Score: 13.346392746224117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection and tracking are key tasks in autonomy, and 3D object detection and tracking in particular have recently become a hot topic. Although various methods have been proposed for object detection, uncertainty in the 3D detection and tracking tasks has been less explored. Uncertainty helps us tackle errors in the perception system and improve robustness. In this paper, we propose a method for improving target tracking performance by adding uncertainty regression to the SECOND detector, one of the most representative 3D object detection algorithms. Our method estimates positional and dimensional uncertainties with a Gaussian Negative Log-Likelihood (NLL) loss and introduces a von Mises NLL loss for angular uncertainty estimation. We feed the uncertainty output into a classical object tracking framework and show that our method improves tracking performance compared with a vanilla tracker that assumes constant covariance.
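The two loss terms and the tracker-side use of the resulting covariances can be sketched as follows. This is an illustrative NumPy implementation of the standard Gaussian and von Mises negative log-likelihoods and of a plain linear Kalman measurement update, not the paper's actual code; all function and parameter names (`gaussian_nll`, `von_mises_nll`, `kalman_update`, `log_var`, `kappa`) are assumptions for illustration.

```python
import numpy as np

def gaussian_nll(pred, target, log_var):
    # Gaussian NLL for positional/dimensional regression. Predicting the
    # log-variance (rather than the variance itself) keeps the variance
    # positive without extra constraints.
    var = np.exp(log_var)
    return 0.5 * (np.log(2.0 * np.pi * var) + (pred - target) ** 2 / var)

def von_mises_nll(pred_angle, target_angle, kappa):
    # von Mises NLL for angular regression. kappa > 0 is the concentration
    # (an inverse-variance analogue); np.i0 is the modified Bessel function
    # I_0 that normalizes the distribution. The cosine term makes the loss
    # periodic, so an angle error of 2*pi costs nothing.
    return np.log(2.0 * np.pi * np.i0(kappa)) - kappa * np.cos(pred_angle - target_angle)

def kalman_update(x, P, z, R, H):
    # Standard Kalman measurement update. Passing the detector's predicted
    # per-object covariance as R, instead of a hand-tuned constant, is the
    # tracking-side idea described in the abstract.
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # corrected state
    P = (np.eye(len(x)) - K @ H) @ P         # corrected covariance
    return x, P
```

Both losses are minimized when the prediction matches the target; the von Mises loss additionally treats angle differences modulo 2π, so a predicted heading of −π is not penalized against a ground truth of π, which a squared-error angle loss would get wrong.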
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Harnessing Uncertainty-aware Bounding Boxes for Unsupervised 3D Object Detection [22.297964850282177]
Unsupervised 3D object detection aims to identify objects of interest from unlabeled raw data, such as LiDAR points.
Recent approaches usually adopt pseudo 3D bounding boxes (3D bboxes) from clustering algorithm to initialize the model training.
We introduce a new uncertainty-aware framework for unsupervised 3D object detection, dubbed UA3D.
arXiv Detail & Related papers (2024-08-01T15:01:07Z)
- UA-Track: Uncertainty-Aware End-to-End 3D Multi-Object Tracking [37.857915442467316]
3D multiple object tracking (MOT) plays a crucial role in autonomous driving perception.
Recent end-to-end query-based trackers simultaneously detect and track objects, and have shown promising potential for the 3D MOT task.
Existing methods overlook the uncertainty issue, which refers to the lack of precise confidence about the state and location of tracked objects.
We propose an Uncertainty-Aware 3D MOT framework, UA-Track, which tackles the uncertainty problem from multiple aspects.
arXiv Detail & Related papers (2024-06-04T09:34:46Z)
- UncertaintyTrack: Exploiting Detection and Localization Uncertainty in Multi-Object Tracking [8.645078288584305]
Multi-object tracking (MOT) methods have seen a significant boost in performance recently.
We introduce UncertaintyTrack, a collection of extensions that can be applied to multiple TBD trackers.
Experiments on the Berkeley Deep Drive MOT dataset show that the combination of our method and informative uncertainty estimates reduces the number of ID switches by around 19%.
arXiv Detail & Related papers (2024-02-19T17:27:04Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Delving into Localization Errors for Monocular 3D Object Detection [85.77319416168362]
Estimating 3D bounding boxes from monocular images is an essential component in autonomous driving.
In this work, we quantify the impact introduced by each sub-task and find that the 'localization error' is the vital factor restricting monocular 3D detection.
arXiv Detail & Related papers (2021-03-30T10:38:01Z)
- Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We also build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.