Detecting OODs as datapoints with High Uncertainty
- URL: http://arxiv.org/abs/2108.06380v1
- Date: Fri, 13 Aug 2021 20:07:42 GMT
- Title: Detecting OODs as datapoints with High Uncertainty
- Authors: Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky,
Insup Lee
- Abstract summary: Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs).
This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis.
Several techniques have been developed to detect inputs where the model's prediction cannot be trusted.
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
- Score: 12.040347694782007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are known to produce incorrect predictions with
very high confidence on out-of-distribution inputs (OODs). This limitation is
one of the key challenges in the adoption of DNNs in high-assurance systems
such as autonomous driving, air traffic management, and medical diagnosis. This
challenge has received significant attention recently, and several techniques
have been developed to detect inputs where the model's prediction cannot be
trusted. These techniques detect OODs as datapoints with either high epistemic
uncertainty or high aleatoric uncertainty. We demonstrate the difference in the
detection ability of these techniques and propose an ensemble approach for
detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
We perform experiments on vision datasets with multiple DNN architectures,
achieving state-of-the-art results in most cases.
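As a rough illustration of the distinction the abstract draws, the sketch below decomposes predictive uncertainty from multiple stochastic forward passes (e.g. MC dropout) or ensemble members into an aleatoric part (expected entropy) and an epistemic part (mutual information), and flags an input as OOD when either is high. The function names and thresholds are hypothetical, and this is not the paper's exact ensemble detector.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_scores(member_probs):
    """Split predictive uncertainty into epistemic and aleatoric parts.

    member_probs: (T, N, K) softmax outputs of T stochastic forward
    passes or ensemble members, for N inputs and K classes.
    """
    mean_probs = member_probs.mean(axis=0)            # (N, K)
    total = entropy(mean_probs)                       # predictive entropy
    aleatoric = entropy(member_probs).mean(axis=0)    # expected entropy
    epistemic = total - aleatoric                     # mutual information
    return epistemic, aleatoric

def flag_ood(member_probs, tau_epi, tau_ale):
    """Flag inputs whose epistemic OR aleatoric uncertainty is high."""
    epistemic, aleatoric = uncertainty_scores(member_probs)
    return (epistemic > tau_epi) | (aleatoric > tau_ale)
```

In practice the two thresholds would be calibrated on held-out in-distribution data, for example set to a high percentile of the scores observed there.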
Related papers
- Mitigating Overconfidence in Out-of-Distribution Detection by Capturing Extreme Activations [1.8531577178922987]
"Overconfidence" is an intrinsic property of certain neural network architectures, leading to poor OOD detection.
We measure extreme activation values in the penultimate layer of neural networks and then leverage this proxy of overconfidence to improve on several OOD detection baselines.
Compared to the baselines, our method often yields substantial improvements, with double-digit gains in OOD detection performance.
arXiv Detail & Related papers (2024-05-21T10:14:50Z)
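The summary above does not spell out how the extreme-activation proxy is folded into existing baselines, so the sketch below shows one plausible variant: use the largest penultimate-layer activation, relative to a reference scale estimated on training data, to temper the logits before computing a maximum-softmax-probability score. Names such as `ref_scale` are hypothetical.

```python
import numpy as np

def extreme_activation(penultimate):
    """Overconfidence proxy: the largest penultimate-layer activation per input."""
    return penultimate.max(axis=-1)

def tempered_msp_score(logits, penultimate, ref_scale):
    """MSP baseline cooled down for inputs with unusually extreme activations.

    ref_scale: e.g. the mean extreme activation on in-distribution training data.
    """
    temper = np.maximum(extreme_activation(penultimate) / ref_scale, 1.0)
    z = logits / temper[:, None]
    z = z - z.max(axis=-1, keepdims=True)          # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)                      # lower score = more OOD-like
```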
- Generalized Out-of-Distribution Detection: A Survey [83.0449593806175]
Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems.
Several other problems, including anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD) are closely related to OOD detection.
We first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems.
arXiv Detail & Related papers (2021-10-21T17:59:41Z)
- Confidence Aware Neural Networks for Skin Cancer Detection [12.300911283520719]
We present three different methods for quantifying uncertainties for skin cancer detection from images.
The obtained results reveal that the predictive uncertainty estimation methods are capable of flagging risky and erroneous predictions.
We also demonstrate that ensemble approaches are more reliable in capturing uncertainties through inference.
arXiv Detail & Related papers (2021-07-19T19:21:57Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
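The certification machinery of this method is beyond a short example, but the basic construction it describes, combining a binary in-vs-out discriminator with a standard classifier into a single OOD-aware predictive distribution, can be sketched as follows. This is an uncertified illustration under the assumption that the discriminator outputs a probability `p_in` that the input is in-distribution.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ood_aware_predict(classifier_logits, p_in):
    """Blend the classifier with a uniform distribution according to p_in.

    classifier_logits: (N, K) logits of the standard classifier.
    p_in:              (N,)  probability that each input is in-distribution.
    Far from the training data p_in -> 0, so the combined confidence falls
    toward 1/K instead of staying overconfident.
    """
    probs = softmax(classifier_logits)
    k = probs.shape[-1]
    uniform = np.full_like(probs, 1.0 / k)
    combined = p_in[:, None] * probs + (1.0 - p_in[:, None]) * uniform
    return combined, combined.max(axis=-1)   # (predictive distribution, confidence)
```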
- Interval Deep Learning for Uncertainty Quantification in Safety Applications [0.0]
Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods that can quantify input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
arXiv Detail & Related papers (2021-05-13T17:21:33Z)
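The summary gives no architectural details for the DINN, but its core primitive, propagating interval-valued inputs through network layers so that the output is a guaranteed enclosure, can be illustrated for a single affine layer followed by ReLU. This is a generic interval-arithmetic sketch, not the paper's exact formulation.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate the input box [lo, hi] through y = x @ W + b.

    Positive weights carry lower bounds to lower bounds; negative
    weights swap them, so the result encloses every reachable output.
    """
    W_pos, W_neg = np.clip(W, 0.0, None), np.clip(W, None, 0.0)
    return lo @ W_pos + hi @ W_neg + b, hi @ W_pos + lo @ W_neg + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds to interval bounds."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: a sensor reading known only up to +/- 0.1
x = np.array([0.5, -1.2, 3.0])
W, b = np.random.randn(3, 4), np.zeros(4)
y_lo, y_hi = interval_relu(*interval_linear(x - 0.1, x + 0.1, W, b))
```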
- Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs [11.211251493663267]
This paper presents a taxonomy of OOD outlier inputs based on their source and nature of uncertainty.
We develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers.
arXiv Detail & Related papers (2021-03-23T15:33:58Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
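The coverage paradigms used in this monitoring architecture are not detailed in the summary, so the sketch below shows one simple instance of the idea: record per-neuron activation ranges on training data and flag test inputs that drive many neurons outside those ranges. The class name and threshold are hypothetical.

```python
import numpy as np

class RangeCoverageMonitor:
    """Flag inputs whose hidden activations leave the ranges seen in training."""

    def fit(self, train_activations):
        # train_activations: (N_train, D) activations of one monitored layer
        self.lo = train_activations.min(axis=0)
        self.hi = train_activations.max(axis=0)
        return self

    def out_of_range_fraction(self, activations):
        # activations: (N, D) activations of the same layer at test time
        outside = (activations < self.lo) | (activations > self.hi)
        return outside.mean(axis=1)

    def flag(self, activations, tau=0.05):
        # suspicious if more than a fraction tau of neurons are out of range
        return self.out_of_range_fraction(activations) > tau
```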
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and can even assign higher likelihoods to OoD samples than to in-distribution data.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)