Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning with Applications in Autonomous Driving
- URL: http://arxiv.org/abs/2105.10266v1
- Date: Fri, 21 May 2021 10:36:16 GMT
- Title: Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning with Applications in Autonomous Driving
- Authors: Carl-Johan Hoel, Krister Wolff, Leo Laine
- Abstract summary: Reinforcement learning can be used to create a decision-making agent for autonomous driving.
Previous approaches provide only black-box solutions, which do not offer information on how confident the agent is about its decisions.
This paper introduces the Ensemble Quantile Networks (EQN) method, which combines distributional RL with an ensemble approach to obtain a complete uncertainty estimate.
- Score: 1.6758573326215689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) can be used to create a decision-making agent for
autonomous driving. However, previous approaches provide only black-box
solutions, which do not offer information on how confident the agent is about
its decisions. An estimate of both the aleatoric and epistemic uncertainty of
the agent's decisions is fundamental for real-world applications of autonomous
driving. Therefore, this paper introduces the Ensemble Quantile Networks (EQN)
method, which combines distributional RL with an ensemble approach, to obtain a
complete uncertainty estimate. The distribution over returns is estimated by
learning its quantile function implicitly, which gives the aleatoric
uncertainty, whereas an ensemble of agents is trained on bootstrapped data to
provide a Bayesian estimation of the epistemic uncertainty. A criterion for
classifying which decisions have an unacceptable uncertainty is also
introduced. The results show that the EQN method can balance risk and time
efficiency in different occluded intersection scenarios by considering the
estimated aleatoric uncertainty. Furthermore, it is shown that the trained
agent can use the epistemic uncertainty information to identify situations that
the agent has not been trained for and thereby avoid making unfounded,
potentially dangerous decisions outside of the training distribution.
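To make the abstract's recipe concrete, here is a minimal PyTorch sketch. All names, the cosine quantile embedding, and the variance-threshold criterion are illustrative assumptions rather than the paper's exact design: an IQN-style network supplies the aleatoric estimate through the spread of its quantile outputs, a bootstrapped ensemble of such networks supplies the epistemic estimate through the disagreement of their expected returns, and a thresholding rule flags decisions whose uncertainty is unacceptable.

```python
import math
import torch
import torch.nn as nn

class QuantileNetwork(nn.Module):
    """Maps a state and sampled quantile levels tau to per-action return
    quantiles, in the spirit of IQN (Dabney et al., 2018)."""

    def __init__(self, state_dim, n_actions, hidden=64, embed_dim=64):
        super().__init__()
        self.state_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.tau_embed = nn.Linear(embed_dim, hidden)  # acts on a cosine embedding of tau
        self.head = nn.Linear(hidden, n_actions)
        self.embed_dim = embed_dim

    def forward(self, state, tau):
        # state: (batch, state_dim); tau: (batch, n_tau), levels in (0, 1)
        i = torch.arange(1, self.embed_dim + 1, dtype=torch.float32)
        cos = torch.cos(math.pi * i * tau.unsqueeze(-1))  # (batch, n_tau, embed_dim)
        phi = torch.relu(self.tau_embed(cos))             # (batch, n_tau, hidden)
        psi = self.state_net(state).unsqueeze(1)          # (batch, 1, hidden)
        return self.head(psi * phi)                       # (batch, n_tau, n_actions)

def estimate_uncertainties(ensemble, state, n_tau=32):
    """Aleatoric: spread of the return distribution within each member.
    Epistemic: disagreement of the expected return across members."""
    tau = torch.rand(state.shape[0], n_tau)
    q = torch.stack([net(state, tau) for net in ensemble])  # (K, batch, n_tau, A)
    mean_q = q.mean(dim=2)                # expected return per member: (K, batch, A)
    aleatoric = q.var(dim=2).mean(dim=0)  # variance over quantile samples
    epistemic = mean_q.var(dim=0)         # variance across ensemble members
    return mean_q.mean(dim=0), aleatoric, epistemic

def act(ensemble, state, sigma_max, fallback_action=0):
    """Greedy action, overridden by a fallback whenever the epistemic variance
    of the chosen action exceeds sigma_max (an assumed thresholding rule)."""
    q, _, epistemic = estimate_uncertainties(ensemble, state)
    a = q.argmax(dim=-1)
    unsafe = epistemic.gather(-1, a.unsqueeze(-1)).squeeze(-1) > sigma_max
    return torch.where(unsafe, torch.full_like(a, fallback_action), a)
```

The division of labor is the point: the quantile dimension carries the aleatoric estimate, the ensemble dimension carries the epistemic estimate, and the criterion thresholds the latter before a decision is executed.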
Related papers
- Explainability through uncertainty: Trustworthy decision-making with neural networks [1.104960878651584]
Uncertainty is a key feature of any machine learning model.
It is particularly important in neural networks, which tend to be overconfident.
Treating uncertainty as a form of explainability (XAI) improves the model's trustworthiness in downstream decision-making tasks.
arXiv Detail & Related papers (2024-03-15T10:22:48Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
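A minimal sketch of that pipeline, with hypothetical `llm` and `clarify` callables (nothing below comes from the paper's code):

```python
from collections import Counter

def clarification_ensemble(llm, clarify, question, n=8):
    """`clarify` rewrites an ambiguous question into an unambiguous variant,
    e.g. by prompting the same LLM; `llm` maps a prompt to an answer string.
    Disagreement across independently clarified inputs reflects ambiguity in
    the input rather than model uncertainty about one clarified input."""
    answers = [llm(clarify(question)) for _ in range(n)]
    votes = Counter(answers)
    prediction, count = votes.most_common(1)[0]
    disagreement = 1.0 - count / n  # 0 when every clarification agrees
    return prediction, disagreement
```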
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
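A simplified sketch for a Gaussian mean with known variance. The plug-in of the full-sample MLE below is for illustration only; the paper's anytime-valid construction chooses the alternative more carefully (e.g. a running estimate or mixture):

```python
import numpy as np

def lr_confidence_set(samples, grid, alpha=0.05, sigma=1.0):
    """Keep every candidate mean whose likelihood ratio against the maximum-
    likelihood estimate stays above alpha; with a well-specified likelihood,
    Ville's inequality is what makes such sets valid at all sample sizes."""
    x = np.asarray(samples, dtype=float)

    def loglik(mu):
        return -0.5 * np.sum((x - mu) ** 2) / sigma**2

    best = loglik(x.mean())
    return [theta for theta in grid if loglik(theta) - best >= np.log(alpha)]
```

For example, `lr_confidence_set(x, np.linspace(-3, 3, 601))` shrinks as samples accumulate, since the likelihood ratio adapts to how informative the data actually are.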
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition [9.208828373290487]
Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems.
It is often challenging to bridge the gap between an apparently optimal policy learnt by an agent and its real-world deployment.
Here we propose recasting a distributional approach (UA-DQN) to render uncertainty estimates by decomposing the net effect of each source of uncertainty.
arXiv Detail & Related papers (2021-09-16T09:36:53Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
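A sketch of the disagreement idea under assumed shapes; the dissimilarity function and all names are placeholders, not the paper's definition:

```python
import numpy as np

def disagreement_uncertainty(prob_maps, dissimilarity=None):
    """prob_maps: list of per-model softmax outputs, each of shape (H, W, C).
    Pixel-wise uncertainty = mean pairwise dissimilarity between the models'
    predicted class distributions (total variation as a placeholder choice)."""
    if dissimilarity is None:
        dissimilarity = lambda p, q: 0.5 * np.abs(p - q).sum(axis=-1)
    u = np.zeros(prob_maps[0].shape[:-1])
    pairs = 0
    for i in range(len(prob_maps)):
        for j in range(i + 1, len(prob_maps)):
            u += dissimilarity(prob_maps[i], prob_maps[j])
            pairs += 1
    return u / pairs
```

Since the maps come from one forward pass per model, this costs far less at inference time than sampling-based alternatives such as MC dropout, which is the efficiency claim above.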
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
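The subtraction at the heart of that decomposition fits in a few lines; `error_model` and `aleatoric_model` are hypothetical learned predictors, not the paper's API:

```python
def deup_epistemic(error_model, aleatoric_model, x):
    """Epistemic part of the predicted out-of-sample error: total predicted
    generalization error minus the estimated irreducible (aleatoric) error,
    clamped at zero because an uncertainty cannot be negative."""
    return max(error_model(x) - aleatoric_model(x), 0.0)
```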
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Reinforcement Learning with Uncertainty Estimation for Tactical Decision-Making in Intersections [0.0]
This paper investigates how a Bayesian reinforcement learning method can be used to create a tactical decision-making agent for autonomous driving.
An ensemble of neural networks with additional randomized prior functions (RPF) is trained using a bootstrapped experience replay memory.
It is shown that the trained ensemble RPF agent can detect cases with high uncertainty, both in situations that are far from the training distribution, and in situations that seldom occur within the training distribution.
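A sketch of one such ensemble member with a randomized prior function, following the additive form Q_k(s) = f_k(s) + beta * p_k(s) of Osband et al.; layer sizes and names are illustrative:

```python
import torch.nn as nn

def rpf_member(state_dim, n_actions, beta=3.0, hidden=64):
    """Only the trainable network f_k is fitted to this member's bootstrapped
    replay data; the prior p_k stays frozen at its random initialization and
    keeps the members diverse far from the data, which is what produces the
    epistemic signal."""
    def mlp():
        return nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, n_actions))
    trainable, prior = mlp(), mlp()
    for p in prior.parameters():
        p.requires_grad_(False)  # the prior is never updated
    return trainable, lambda state: trainable(state) + beta * prior(state)
```

The variance of the K members' Q-values for a given state-action pair then serves as the epistemic estimate used to detect the situations described above.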
arXiv Detail & Related papers (2020-06-17T11:29:26Z)
- Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation [0.9883261192383611]
Reinforcement learning can be used to create a tactical decision-making agent for autonomous driving.
This paper investigates how a Bayesian RL technique can be used to estimate the uncertainty of decisions in autonomous driving.
arXiv Detail & Related papers (2020-04-22T08:22:28Z)