iDECODe: In-distribution Equivariance for Conformal Out-of-distribution
Detection
- URL: http://arxiv.org/abs/2201.02331v1
- Date: Fri, 7 Jan 2022 05:21:40 GMT
- Title: iDECODe: In-distribution Equivariance for Conformal Out-of-distribution
Detection
- Authors: Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban,
Oleg Sokolsky, Insup Lee
- Abstract summary: Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
- Score: 24.518698391381204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning methods such as deep neural networks (DNNs), despite their
success across different domains, are known to often generate incorrect
predictions with high confidence on inputs outside their training distribution.
The deployment of DNNs in safety-critical domains requires detection of
out-of-distribution (OOD) data so that DNNs can abstain from making predictions
on those. A number of methods have been recently developed for OOD detection,
but there is still room for improvement. We propose the new method iDECODe,
leveraging in-distribution equivariance for conformal OOD detection. It relies
on a novel base non-conformity measure and a new aggregation method, used in
the inductive conformal anomaly detection framework, thereby guaranteeing a
bounded false detection rate. We demonstrate the efficacy of iDECODe by
experiments on image and audio datasets, obtaining state-of-the-art results. We
also show that iDECODe can detect adversarial examples.
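The abstract's core ingredient is the inductive conformal anomaly detection framework: non-conformity scores are computed on a held-out calibration set, a test input receives a p-value by ranking its score against the calibration scores, and it is flagged as OOD when that p-value falls below a chosen level epsilon, which bounds the false detection rate on in-distribution data. The sketch below illustrates that generic recipe only; the distance-to-mean score is a placeholder assumption, not iDECODe's equivariance-based non-conformity measure or its aggregation method.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Conformal p-value: rank of the test score among calibration scores."""
    cal_scores = np.asarray(cal_scores)
    # The +1 terms give a valid (super-uniform) p-value under exchangeability.
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

def detect_ood(score_fn, cal_data, x, epsilon=0.05):
    """Flag x as OOD when its conformal p-value falls below epsilon.

    score_fn is any non-conformity measure (large = unusual); it stands in
    for the paper's measure, which is not reproduced here. For in-distribution
    inputs, the probability of being flagged is at most epsilon, so the
    false detection rate is bounded.
    """
    cal_scores = [score_fn(c) for c in cal_data]
    p = conformal_p_value(cal_scores, score_fn(x))
    return p < epsilon, p

# Toy usage with a distance-to-mean score as an illustrative stand-in.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))   # data used to fit the score
cal = rng.normal(0.0, 1.0, size=(200, 8))     # held-out calibration split
mu = train.mean(axis=0)
score_fn = lambda v: float(np.linalg.norm(v - mu))

print(detect_ood(score_fn, cal, rng.normal(0.0, 1.0, size=8)))  # in-distribution: typically not flagged
print(detect_ood(score_fn, cal, rng.normal(4.0, 1.0, size=8)))  # shifted input: typically flagged as OOD
```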
Related papers
- Hypothesis-Driven Deep Learning for Out of Distribution Detection [0.8191518216608217]
We propose a hypothesis-driven approach to quantify whether a new sample is InD or OoD.
We apply our method to detecting an unseen sample of bacteria presented to a trained deep learning model, and show that it reveals interpretable differences between InD and OoD latent responses.
arXiv Detail & Related papers (2024-03-21T01:06:47Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our detection method performs on par with or better than the state of the art, while requiring substantially less computation time.
arXiv Detail & Related papers (2022-10-19T21:27:25Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Model2Detector: Widening the Information Bottleneck for Out-of-Distribution Detection using a Handful of Gradient Steps [12.263417500077383]
Out-of-distribution detection is an important capability that has long eluded vanilla neural networks.
Recent advances in inference-time out-of-distribution detection help mitigate some of these problems.
We show how our method consistently outperforms the state-of-the-art in detection accuracy on popular image datasets.
arXiv Detail & Related papers (2022-02-22T23:03:40Z)
- Out-of-Distribution Detection using Outlier Detection Methods [0.0]
Out-of-distribution (OOD) detection deals with anomalous input to neural networks.
We use outlier detection algorithms to detect anomalous input as reliably as specialized methods from the field of OOD detection.
No neural network adaptation is required; detection is based on the model's softmax score (see the sketch after this list).
arXiv Detail & Related papers (2021-08-18T16:05:53Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Statistical Testing for Efficient Out of Distribution Detection in Deep Neural Networks [26.0303701309125]
This paper frames the out-of-distribution (OOD) detection problem in deep neural networks as a statistical hypothesis testing problem.
We build on this framework to suggest a novel OOD procedure based on low-order statistics.
Our method achieves results comparable to or better than the state of the art on well-accepted OOD benchmarks without retraining the network parameters.
arXiv Detail & Related papers (2021-02-25T16:14:47Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and can even assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
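The sketch referenced in the "Out-of-Distribution Detection using Outlier Detection Methods" entry above illustrates that recipe: collect the trained model's softmax outputs on in-distribution data, fit an off-the-shelf outlier detector on them, and flag test inputs whose softmax outputs the detector considers anomalous. This is an assumption-laden illustration using scikit-learn's IsolationForest and a stand-in `model` callable, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_softmax_outlier_detector(model, in_dist_inputs, random_state=0):
    """Fit an outlier detector on the model's softmax outputs for ID data.

    `model` is any callable mapping a batch of inputs to logits; it is a
    placeholder here, not a specific architecture from the paper.
    """
    probs = softmax(model(in_dist_inputs))
    return IsolationForest(random_state=random_state).fit(probs)

def is_ood(detector, model, inputs):
    """Return a boolean array: True where the detector flags the input as OOD."""
    probs = softmax(model(inputs))
    # IsolationForest.predict returns -1 for outliers and +1 for inliers.
    return detector.predict(probs) == -1

# Toy usage: a fixed random linear "model"; OOD inputs are drawn at a larger
# scale, so their softmax vectors look unlike the in-distribution ones.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 10))
model = lambda x: x @ W
id_inputs = rng.normal(0.0, 1.0, size=(500, 16))
ood_inputs = rng.normal(0.0, 3.0, size=(50, 16))

det = fit_softmax_outlier_detector(model, id_inputs)
print(is_ood(det, model, ood_inputs).mean())      # fraction of OOD inputs flagged
print(is_ood(det, model, id_inputs[:50]).mean())  # false-alarm rate on ID inputs
```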
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.