Investigation of Uncertainty of Deep Learning-based Object
Classification on Radar Spectra
- URL: http://arxiv.org/abs/2106.05870v1
- Date: Tue, 1 Jun 2021 09:50:19 GMT
- Title: Investigation of Uncertainty of Deep Learning-based Object
Classification on Radar Spectra
- Authors: Kanil Patel, William Beluch, Kilian Rambach, Adriana-Eliza Cozma,
Michael Pfeiffer and Bin Yang
- Abstract summary: Deep learning (DL) has attracted increasing interest to improve object type classification for automotive radar.
Current DL research has investigated how uncertainties of predictions can be quantified.
In this article, we evaluate the potential of these methods for safe automotive radar perception.
- Score: 8.797293761152604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) has recently attracted increasing interest to improve
object type classification for automotive radar. In addition to high accuracy,
it is crucial for decision making in autonomous vehicles to evaluate the
reliability of the predictions; however, decisions of DL networks are
non-transparent. Current DL research has investigated how uncertainties of
predictions can be quantified, and in this article, we evaluate the potential
of these methods for safe automotive radar perception. In particular, we
evaluate how uncertainty quantification can support radar perception under (1)
domain shift, (2) corruptions of input signals, and (3) the presence of
unknown objects. We find that, in agreement with phenomena observed in the
literature, deep radar classifiers are overly confident, even in their wrong
predictions. This raises concerns about the use of the confidence values for
decision making under uncertainty, as the model fails to notify when it cannot
handle an unknown situation. Accurate confidence values would allow optimal
integration of multiple information sources, e.g. via sensor fusion. We show
that by applying state-of-the-art post-hoc uncertainty calibration, the quality
of confidence measures can be significantly improved, thereby partially
resolving the over-confidence problem. Our investigation shows that further
research into training and calibrating DL networks is necessary and offers
great potential for safe automotive object classification with radar sensors.
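A widely used post-hoc calibration method of the kind the abstract refers to is temperature scaling: a single scalar T is fitted on held-out validation logits to minimise negative log-likelihood, softening overconfident predictions without changing the predicted class. The grid-search sketch below is illustrative, not the authors' exact procedure.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the labels under temperature-scaled logits."""
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature T that minimises validation NLL (simple grid search).
    T > 1 softens the distribution, which is the typical fix for overconfidence."""
    return min(grid, key=lambda T: nll(logits, labels, T))
```

On an overconfident classifier (high logit magnitudes, imperfect accuracy) the fitted temperature comes out above 1, and the calibrated confidences track the empirical accuracy more closely.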
Related papers
- Predicting Safety Misbehaviours in Autonomous Driving Systems using Uncertainty Quantification [8.213390074932132]
This paper evaluates different uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours.
We compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions.
In our study, we evaluated the effectiveness and computational overhead of two uncertainty quantification methods, MC-Dropout and Deep Ensembles, for misbehaviour avoidance.
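MC-Dropout, named above, keeps dropout active at test time and averages the softmax outputs of several stochastic forward passes; the spread of the averaged prediction (e.g. its entropy) serves as the uncertainty score. The NumPy sketch below uses a toy two-layer MLP with hypothetical weight matrices `W1`, `W2` as stand-ins for a real model.

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.5, n_samples=50, rng=None):
    """Monte-Carlo dropout: run several stochastic forward passes with
    dropout enabled and average the softmax outputs. Returns the mean
    prediction and its predictive entropy (higher = more uncertain)."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) > p           # Bernoulli dropout mask
        h = h * mask / (1.0 - p)                 # inverted-dropout scaling
        z = h @ W2
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        probs.append(e / e.sum(axis=-1, keepdims=True))
    mean_p = np.mean(probs, axis=0)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    return mean_p, entropy
```

Deep Ensembles follow the same averaging scheme, except the stochasticity comes from independently trained networks rather than dropout masks.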
arXiv Detail & Related papers (2024-04-29T10:28:28Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- UncertaintyTrack: Exploiting Detection and Localization Uncertainty in Multi-Object Tracking [8.645078288584305]
Multi-object tracking (MOT) methods have seen a significant boost in performance recently.
We introduce UncertaintyTrack, a collection of extensions that can be applied to multiple TBD trackers.
Experiments on the Berkeley Deep Drive MOT dataset show that the combination of our method and informative uncertainty estimates reduces the number of ID switches by around 19%.
arXiv Detail & Related papers (2024-02-19T17:27:04Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates [23.449073032842076]
Driver observation models are rarely deployed under perfect conditions.
We show that raw neural network-based approaches tend to significantly overestimate their prediction quality.
We introduce Calibrated Action Recognition with Input Guidance (CARING), a novel approach that leverages an additional neural network to learn to scale the confidences depending on the video representation.
arXiv Detail & Related papers (2022-04-10T12:43:58Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Improving Uncertainty of Deep Learning-based Object Classification on Radar Spectra using Label Smoothing [9.438141018800636]
We learn deep radar spectra classifiers which offer robust real-time uncertainty estimates using label smoothing during training.
In this article, we exploit radar-specific know-how to define soft labels which encourage the classifiers to learn to output high-quality uncertainty estimates.
Our investigations show how simple radar knowledge can easily be combined with complex data-driven learning algorithms to yield safe automotive radar perception.
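The paper above derives radar-specific soft labels; the generic baseline it builds on is uniform label smoothing, where the one-hot target is mixed with a uniform distribution so the classifier is never trained toward full certainty. A minimal sketch:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Uniform label smoothing: (1 - eps) on the true class plus
    eps / K spread evenly over all K classes. Radar-specific soft
    labels would replace the uniform term with domain knowledge."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes
```

Training against these soft targets (e.g. with cross-entropy) penalises extreme logits, which typically improves the quality of the resulting uncertainty estimates.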
arXiv Detail & Related papers (2021-09-27T07:49:38Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
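A disagreement-based uncertainty measure of the kind described above can be sketched as the average pairwise dissimilarity between the outputs of several models or heads. The total-variation distance used here is an illustrative choice; the paper's exact dissimilarity function may differ.

```python
import numpy as np

def disagreement_uncertainty(prob_maps):
    """Average pairwise total-variation distance between the softmax
    outputs of several predictors; 0 = full agreement, 1 = maximal
    disagreement between one-hot predictions."""
    prob_maps = np.asarray(prob_maps)   # shape: (n_models, ..., n_classes)
    n = len(prob_maps)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 0.5 * np.abs(prob_maps[i] - prob_maps[j]).sum(axis=-1)
            pairs += 1
    return total / pairs
```

Because it only compares already-computed outputs, such a measure adds little inference-time cost compared to sampling-based alternatives.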
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.