Unified Out-Of-Distribution Detection: A Model-Specific Perspective
- URL: http://arxiv.org/abs/2304.06813v2
- Date: Fri, 3 Nov 2023 18:03:29 GMT
- Title: Unified Out-Of-Distribution Detection: A Model-Specific Perspective
- Authors: Reza Averly, Wei-Lun Chao
- Abstract summary: Out-of-distribution (OOD) detection aims to identify test examples that do not belong to the training distribution.
We present a novel, unifying framework to study OOD detection in a broader scope.
- Score: 31.68704233156108
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Out-of-distribution (OOD) detection aims to identify test examples that do
not belong to the training distribution and are thus unlikely to be predicted
reliably. Despite a plethora of existing works, most of them focused only on
the scenario where OOD examples come from semantic shift (e.g., unseen
categories), ignoring other possible causes (e.g., covariate shift). In this
paper, we present a novel, unifying framework to study OOD detection in a
broader scope. Instead of detecting OOD examples from a particular cause, we
propose to detect examples that a deployed machine learning model (e.g., an
image classifier) is unable to predict correctly. That is, whether a test
example should be detected and rejected or not is "model-specific". We show
that this framework unifies the detection of OOD examples caused by semantic
shift and covariate shift, and closely addresses the concern of applying a
machine learning model to uncontrolled environments. We provide an extensive
analysis that involves a variety of models (e.g., different architectures and
training strategies), sources of OOD examples, and OOD detection approaches,
and reveal several insights into improving and understanding OOD detection in
uncontrolled environments.
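As a minimal sketch (not the authors' released code), the model-specific view can be evaluated by scoring each test example with any confidence function, e.g. maximum softmax probability (MSP), and treating "should be rejected" as "the deployed model predicts this example incorrectly", regardless of whether the shift is semantic or covariate. Function names below are illustrative assumptions.

```python
# Illustrative sketch of "model-specific" OOD detection:
# flag examples the deployed classifier is unlikely to predict correctly,
# using maximum softmax probability (MSP) as one possible confidence score.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def msp_scores_and_correctness(model, loader, device="cpu"):
    """Return per-example MSP confidence and whether the model's prediction is correct."""
    model.eval()
    scores, correct = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        conf, pred = F.softmax(logits, dim=-1).max(dim=-1)  # MSP and predicted class
        scores.append(conf.cpu())
        correct.append(pred.cpu() == labels)                 # model-specific "accept" label
    return torch.cat(scores), torch.cat(correct)

def model_specific_auroc(scores, correct):
    """AUROC for separating correctly from incorrectly predicted test examples."""
    return roc_auc_score(correct.numpy(), scores.numpy())
```

Pooling the test loader from in-distribution, covariate-shifted, and semantically shifted sources then measures, in one number, whether the rejected examples are exactly the ones the deployed model would get wrong.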
Related papers
- Towards More Trustworthy Deep Code Models by Enabling Out-of-Distribution Detection [12.141246816152288]
We develop two types of SE-specific OOD detection models for code: unsupervised and weakly-supervised.
Our proposed methods significantly outperform the baselines in detecting OOD samples from four different scenarios simultaneously and also positively impact a main code understanding task.
arXiv Detail & Related papers (2025-02-26T06:59:53Z)
- A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection [25.788559173418363]
We characterize under what conditions OOD detection is uniformly and non-uniformly learnable.
We show that in several cases, non-uniform learnability turns a number of negative results into positive ones.
In all cases where OOD detection is learnable, we provide concrete learning algorithms and a sample-complexity analysis.
arXiv Detail & Related papers (2025-01-15T14:19:03Z)
- Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z)
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection [9.656342063882555]
We study five types of distribution shifts and evaluate the performance of recent OOD detection methods on each of them.
Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts.
We present an ensemble approach that offers a more consistent and comprehensive solution for broad OOD detection.
arXiv Detail & Related papers (2023-08-22T14:52:44Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, which combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- SR-OOD: Out-of-Distribution Detection via Sample Repairing [48.272537939227206]
Out-of-distribution (OOD) detection is a crucial task for ensuring the reliability and robustness of machine learning models.
Recent works have shown that generative models often assign high confidence scores to OOD samples, indicating that they fail to capture the semantic information of the data.
We take advantage of sample repairing and propose a novel OOD detection framework, namely SR-OOD.
Our framework achieves superior performance over the state-of-the-art generative methods in OOD detection.
arXiv Detail & Related papers (2023-05-26T16:35:20Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
- Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices [8.611328447624679]
Deep neural networks yield confident, incorrect predictions when presented with out-of-distribution examples.
In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and the predicted class.
We find that characterizing activity patterns with Gram matrices and flagging anomalous Gram-matrix values can yield high OOD detection rates; a simplified sketch follows at the end of this list.
arXiv Detail & Related papers (2019-12-28T19:44:03Z)
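The Gram-matrix entry above admits a compact illustration. The following is only a sketch under stated assumptions: per-layer feature maps are summarized by their channel Gram matrices, per-class value ranges are recorded on in-distribution data, and a test example's total deviation from those ranges serves as its OOD score; the published method adds higher-order Gram matrices and a specific normalization that are omitted here.

```python
# Illustrative sketch of Gram-matrix OOD scoring (not the paper's exact procedure).
import torch

def gram_vector(feature_map):
    """Upper-triangular entries of the channel Gram matrix of a (C, H, W) feature map."""
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)
    gram = flat @ flat.t()                      # channel co-activation matrix, shape (C, C)
    rows, cols = torch.triu_indices(c, c)
    return gram[rows, cols]

def fit_ranges(per_class_grams):
    """Record per-entry min/max of Gram vectors, grouped by predicted class, on ID data."""
    return {cls: (torch.stack(vecs).min(dim=0).values, torch.stack(vecs).max(dim=0).values)
            for cls, vecs in per_class_grams.items()}

def deviation(gram_vec, ranges, predicted_class):
    """Total normalized amount by which Gram values fall outside the recorded ID range."""
    lo, hi = ranges[predicted_class]
    below = torch.clamp(lo - gram_vec, min=0) / (lo.abs() + 1e-6)
    above = torch.clamp(gram_vec - hi, min=0) / (hi.abs() + 1e-6)
    return (below + above).sum().item()         # larger deviation => more likely OOD
```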