Reducing Overconfidence Predictions for Autonomous Driving Perception
- URL: http://arxiv.org/abs/2202.07825v1
- Date: Wed, 16 Feb 2022 01:59:55 GMT
- Title: Reducing Overconfidence Predictions for Autonomous Driving Perception
- Authors: Gledson Melotti, Cristiano Premebida, Jordan J. Bird, Diego R. Faria,
Nuno Gonçalves
- Abstract summary: We show that Maximum Likelihood (ML) and Maximum a-Posteriori (MAP) functions are more suitable for probabilistic interpretations than SoftMax and Sigmoid-based predictions for object recognition.
ML and MAP functions can be implemented in existing trained networks, that is, the approach benefits from the output of the Logit layer of pre-trained networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In state-of-the-art deep learning for object recognition, SoftMax and Sigmoid
functions are most commonly employed as the predictor outputs. Such layers
often produce overconfident predictions rather than proper probabilistic
scores, which can thus harm the decision-making of 'critical' perception
systems applied in autonomous driving and robotics. Given this, the experiments
in this work propose a probabilistic approach based on distributions calculated
out of the Logit layer scores of pre-trained networks. We demonstrate that
Maximum Likelihood (ML) and Maximum a-Posteriori (MAP) functions are more
suitable for probabilistic interpretations than SoftMax and Sigmoid-based
predictions for object recognition. We explore distinct sensor modalities via
RGB images and LiDARs (RV: range-view) data from the KITTI and Lyft Level-5
datasets, where our approach shows promising performance compared to the usual
SoftMax and Sigmoid layers, with the benefit of enabling interpretable
probabilistic predictions. Another advantage of the approach introduced in this
paper is that the ML and MAP functions can be implemented in existing trained
networks, that is, the approach benefits from the output of the Logit layer of
pre-trained networks. Thus, there is no need to carry out a new training phase
since the ML and MAP functions are used in the test/prediction phase.
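The idea in the abstract can be sketched numerically. The snippet below is a minimal illustration, assuming Gaussian per-class logit distributions with made-up parameters (the paper estimates the actual distributions from the Logit layer of a pre-trained network and may use a different distributional form): ML scores each class by the likelihood of its logit, MAP additionally weights by a class prior, and both are compared against SoftMax on the same raw logits.

```python
import numpy as np

# Hypothetical per-class logit statistics. In the paper's setting these
# distributions are estimated from Logit-layer outputs of a pre-trained
# network; the numbers here are illustrative only.
class_means = np.array([4.0, -1.0, 0.5])   # mean logit per class
class_stds  = np.array([1.5, 1.2, 1.0])    # logit std-dev per class
priors      = np.array([0.5, 0.3, 0.2])    # class frequencies P(c)

def gauss_pdf(x, mu, sigma):
    """Gaussian density, vectorized over classes."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ml_scores(logits):
    """Maximum Likelihood: score class c by p(logit_c | c), then normalize."""
    lik = gauss_pdf(logits, class_means, class_stds)
    return lik / lik.sum()

def map_scores(logits):
    """Maximum a-Posteriori: p(c | logit_c) proportional to p(logit_c | c) * P(c)."""
    post = gauss_pdf(logits, class_means, class_stds) * priors
    return post / post.sum()

logits = np.array([4.0, -2.0, -1.0])  # raw Logit-layer output for one sample
print(softmax(logits))    # saturates toward 1 for the top class
print(ml_scores(logits))  # same winner, but far less peaked
print(map_scores(logits))
```

With these toy numbers, SoftMax assigns nearly all mass to the top class, while the ML/MAP scores pick the same class with a much flatter, more interpretable distribution — the overconfidence-reduction effect the abstract describes, requiring no retraining because only the logits are consumed.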
Related papers
- The Misclassification Likelihood Matrix: Some Classes Are More Likely To Be Misclassified Than Others [1.654278807602897]
This study introduces Misclassification Likelihood Matrix (MLM) as a novel tool for quantifying the reliability of neural network predictions under distribution shifts.
The implications of this work extend beyond image classification, with ongoing applications in autonomous systems, such as self-driving cars.
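One plausible, simplified reading of a misclassification likelihood matrix — not necessarily the paper's exact construction — is a row-normalized confusion matrix with the diagonal zeroed, so each entry estimates how likely a given true class is to be mistaken for each other class:

```python
import numpy as np

# Hypothetical confusion counts on a validation set (rows = true class,
# columns = predicted class); illustrative numbers only.
confusion = np.array([
    [90.0,  8.0,  2.0],
    [ 5.0, 80.0, 15.0],
    [ 3.0, 12.0, 85.0],
])

# Simplified misclassification-likelihood matrix: P(predicted = j | true = i)
# restricted to errors, so large entries flag confusable class pairs.
mlm = confusion / confusion.sum(axis=1, keepdims=True)
np.fill_diagonal(mlm, 0.0)

# Most likely confusion for each true class:
worst = mlm.argmax(axis=1)
```

Such a matrix supports the reliability analysis the summary mentions: class pairs with high off-diagonal mass are candidates for extra scrutiny under distribution shift.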
arXiv Detail & Related papers (2024-07-10T16:43:14Z) - Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z) - ConvBKI: Real-Time Probabilistic Semantic Mapping Network with Quantifiable Uncertainty [7.537718151195062]
We develop a modular neural network for real-time (> 10 Hz) semantic mapping in uncertain environments.
Our approach combines the reliability of classical probabilistic algorithms with the performance and efficiency of modern neural networks.
arXiv Detail & Related papers (2023-10-24T17:30:26Z) - Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs)
PFNs leverage in-context learning and large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z) - A deep learning based surrogate model for stochastic simulators [0.0]
We propose a deep learning-based surrogate model for stochastic simulators.
We utilize conditional maximum mean discrepancy (CMMD) as the loss-function.
Results obtained indicate the excellent performance of the proposed approach.
arXiv Detail & Related papers (2021-10-24T11:38:47Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic
Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Efficient Data-Dependent Learnability [8.766022970635898]
The predictive normalized maximum likelihood (pNML) approach has recently been proposed as the min-max optimal solution to the batch learning problem.
We show that when applied to neural networks, this approximation can detect out-of-distribution examples effectively.
arXiv Detail & Related papers (2020-11-20T10:44:55Z) - Probabilistic Object Classification using CNN ML-MAP layers [0.0]
We introduce a CNN probabilistic approach based on distributions calculated in the network's Logit layer.
The new approach shows promising performance compared to SoftMax.
arXiv Detail & Related papers (2020-05-29T13:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.