Uncertainty-Based Out-of-Distribution Classification in Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2001.00496v1
- Date: Tue, 31 Dec 2019 09:52:49 GMT
- Title: Uncertainty-Based Out-of-Distribution Classification in Deep
Reinforcement Learning
- Authors: Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia
Linnhoff-Popien
- Abstract summary: Wrong predictions for out-of-distribution data can cause safety-critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness to out-of-distribution (OOD) data is an important goal in building
reliable machine learning systems. Especially in autonomous systems, wrong
predictions for OOD inputs can cause safety-critical situations. As a first
step towards a solution, we consider the problem of detecting such data in a
value-based deep reinforcement learning (RL) setting. Modelling this problem as
a one-class classification problem, we propose a framework for
uncertainty-based OOD classification: UBOOD. It is based on the effect that an
agent's epistemic uncertainty is reduced for situations encountered during
training (in-distribution), and thus lower than for unencountered (OOD)
situations. Since the framework is agnostic to the approach used for estimating
epistemic uncertainty, it can be combined with different uncertainty estimation
methods, e.g. approximate Bayesian inference or ensembling techniques.
We further present a first viable solution for calculating a dynamic
classification threshold, based on the uncertainty distribution of the training
data. Evaluation shows that the framework produces reliable classification
results when combined with ensemble-based estimators, while the combination
with Concrete Dropout-based estimators fails to reliably detect OOD situations.
In summary, UBOOD presents a viable approach for OOD classification in deep RL
settings by leveraging the epistemic uncertainty of the agent's value function.
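For intuition, here is a minimal sketch of the UBOOD decision rule under one possible instantiation: an ensemble of Q-networks whose disagreement serves as the epistemic-uncertainty estimate. The variance-based uncertainty measure, the 99th-percentile threshold, and all function names are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def epistemic_uncertainty(q_values: np.ndarray) -> float:
    """Disagreement of an ensemble of Q-estimates for one state.

    q_values: array of shape (n_ensemble, n_actions) holding each
    ensemble member's Q-value predictions for the same state.
    """
    # Variance across ensemble members, averaged over actions, is a
    # simple proxy for epistemic uncertainty (an assumed choice here).
    return float(q_values.var(axis=0).mean())

def fit_threshold(train_uncertainties: np.ndarray, quantile: float = 0.99) -> float:
    """Dynamic classification threshold from the uncertainty
    distribution of the training data; a high quantile is one
    plausible instantiation of the paper's idea."""
    return float(np.quantile(train_uncertainties, quantile))

def is_ood(q_values: np.ndarray, threshold: float) -> bool:
    """One-class decision: flag a state as OOD if the agent's epistemic
    uncertainty exceeds what was typical during training."""
    return epistemic_uncertainty(q_values) > threshold
```

At deployment time, each visited state would be scored by all ensemble members and flagged as OOD whenever their disagreement exceeds the threshold fitted on in-distribution experience.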
Related papers
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread but commonly neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
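The flat-minima ingredient can be sketched with a SAM-style two-step update; sharpness-aware minimization is one established way to seek flat minima, and whether it matches this paper's exact procedure is an assumption.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM-style update: take the gradient at an adversarially
    perturbed weight point so that optimisation prefers flat minima."""
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                    # climb toward the sharpest direction
            perturbations.append((p, e))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()      # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)                    # restore the original weights
    optimizer.step()                     # descend with the flatness-aware gradient
    optimizer.zero_grad()
```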
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Large Class Separation is not what you need for Relational Reasoning-based OOD Detection [12.578844450586]
Out-Of-Distribution (OOD) detection methods provide a solution by identifying semantic novelty.
Most of these methods leverage a learning stage on the known data, which means training (or fine-tuning) a model to capture the concept of normality.
A viable alternative is that of evaluating similarities in the embedding space produced by large pre-trained models without any further learning effort.
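A minimal sketch of this training-free alternative, assuming embeddings from a frozen pre-trained model; the cosine-similarity score and the function names are illustrative, not the paper's exact relational-reasoning method.

```python
import numpy as np

def ood_score(test_emb: np.ndarray, known_embs: np.ndarray) -> float:
    """Higher score = more likely OOD.

    test_emb: (d,) embedding of a test sample from a frozen pre-trained model.
    known_embs: (n, d) embeddings of the known (in-distribution) data.
    """
    # L2-normalise so that dot products become cosine similarities.
    t = test_emb / np.linalg.norm(test_emb)
    k = known_embs / np.linalg.norm(known_embs, axis=1, keepdims=True)
    # A sample far from every known embedding is treated as semantically novel.
    return 1.0 - float(np.max(k @ t))
```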
arXiv Detail & Related papers (2023-07-12T14:10:15Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
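The objective can be sketched as ordinary cross-entropy plus a confidence penalty on a separate uncertainty dataset; the `lam` weight and the max-softmax penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ddcm_loss(model, x_train, y_train, x_uncertain, lam=1.0):
    # Usual supervised loss on in-distribution training data.
    ce = F.cross_entropy(model(x_train), y_train)
    # Penalise confidence on the uncertainty dataset: pushing the maximum
    # softmax probability down drives predictions toward uniform.
    probs = F.softmax(model(x_uncertain), dim=1)
    confidence = probs.max(dim=1).values.mean()
    return ce + lam * confidence
```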
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training [29.4569172720654]
We develop a simple adversarial training scheme that incorporates an attack of the uncertainty predicted by the dropout ensemble.
We demonstrate that this method improves OOD detection performance on standard (i.e., not adversarially crafted) data, raising the standardized partial AUC from near-random guessing to $\geq 0.75$.
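One plausible realisation of such an attack is an FGSM-style step that increases the disagreement of an MC-dropout ensemble; the step size, the number of dropout samples, and the variance objective are assumptions for illustration.

```python
import torch

def uncertainty_attack(model, x, n_dropout=8, eps=0.03):
    """Perturb x to maximise the variance of MC-dropout predictions."""
    model.train()  # keep dropout active at inference time
    x_adv = x.clone().detach().requires_grad_(True)
    preds = torch.stack([torch.softmax(model(x_adv), dim=1)
                         for _ in range(n_dropout)])
    # Epistemic-uncertainty proxy: variance across dropout samples.
    uncertainty = preds.var(dim=0).sum()
    uncertainty.backward()
    # FGSM-style step in the direction that increases uncertainty.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```

Training would then mix these perturbed inputs (with their original labels) into the batches, encouraging the ensemble's uncertainty estimates to stay stable under small input perturbations.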
arXiv Detail & Related papers (2022-09-05T14:32:19Z)
- An Uncertainty-Informed Framework for Trustworthy Fault Diagnosis in Safety-Critical Applications [1.988145627448243]
Low trustworthiness of deep learning-based prognostic and health management (PHM) hinders its applications in safety-critical assets.
We propose an uncertainty-informed framework that diagnoses faults while simultaneously detecting OOD data.
We show that the proposed framework is of particular advantage in tackling unknowns and enhancing the trustworthiness of fault diagnosis in safety-critical applications.
arXiv Detail & Related papers (2021-10-08T21:24:14Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that combines, from first principles, a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection [0.0]
This paper gives a theoretical explanation for previously reported experimental findings and illustrates it on synthetic data.
We prove that such techniques are not able to reliably identify OOD samples in a classification setting.
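The failure mode is easy to reproduce in a toy setting: because a ReLU network is piecewise affine, scaling an input far beyond the training range makes the logit gap grow linearly, so softmax confidence saturates instead of dropping. The snippet below is a toy illustration of the phenomenon, not the paper's proof.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(1, 2)
for scale in (1, 10, 1000):
    # Far from the origin, the ReLU net acts affinely along the ray, so
    # logits grow with the scale and softmax confidence saturates toward 1.
    conf = torch.softmax(net(scale * x), dim=1).max().item()
    print(f"scale={scale:5d}  max softmax confidence={conf:.3f}")
```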
arXiv Detail & Related papers (2020-12-09T21:35:55Z)
- Certifiably Adversarially Robust Detection of Out-of-Distribution Data [111.67388500330273]
We aim for certifiable worst-case guarantees for OOD detection by enforcing low confidence at OOD points.
We show that non-trivial bounds on the confidence for OOD data generalizing beyond the OOD dataset seen at training time are possible.
arXiv Detail & Related papers (2020-07-16T17:16:47Z)