Understanding the properties and limitations of contrastive learning for
Out-of-Distribution detection
- URL: http://arxiv.org/abs/2211.03183v1
- Date: Sun, 6 Nov 2022 17:33:29 GMT
- Title: Understanding the properties and limitations of contrastive learning for
Out-of-Distribution detection
- Authors: Nawid Keshtmand, Raul Santos-Rodriguez, Jonathan Lawry
- Abstract summary: A popular approach to out-of-distribution (OOD) detection is based on a self-supervised learning technique referred to as contrastive learning.
In this paper, we aim to understand the effectiveness and limitations of existing contrastive learning methods for OOD detection.
- Score: 3.2689702143620143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recent popular approach to out-of-distribution (OOD) detection is based on
a self-supervised learning technique referred to as contrastive learning. There
are two main variants of contrastive learning, namely instance and class
discrimination, targeting features that can discriminate between different
instances for the former, and different classes for the latter.
In this paper, we aim to understand the effectiveness and limitations of
existing contrastive learning methods for OOD detection. We approach this in three
ways. First, we systematically study the performance difference between the
instance discrimination and supervised contrastive learning variants in
different OOD detection settings. Second, we study which in-distribution (ID)
classes OOD data tend to be classified into. Finally, we study the spectral
decay property of the different contrastive learning approaches and examine how
it correlates with OOD detection performance. In scenarios where the ID and OOD
datasets are sufficiently different from one another, we see that instance
discrimination, in the absence of fine-tuning, is competitive with supervised
approaches in OOD detection. We see that OOD samples tend to be classified into
classes that have a distribution similar to the distribution of the entire
dataset. Furthermore, we show that contrastive learning learns a feature space
whose singular vectors span several high-variance directions, which can be
detrimental or beneficial to OOD detection depending on the inference approach
used.
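The spectral-decay analysis described in the abstract can be sketched as follows: compute the singular values of a centred feature matrix and inspect how quickly they fall off. A feature space with many high-variance directions shows slow decay; one dominated by a few directions shows fast decay. This is an illustrative sketch, not the paper's code; all function and variable names here are assumptions.

```python
import numpy as np

def spectral_decay(features: np.ndarray) -> np.ndarray:
    """Return normalised singular values of a (num_samples, dim) feature matrix."""
    # Centre the features so the singular values reflect variance directions.
    centred = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centred, compute_uv=False)
    return s / s.sum()  # normalise so the values sum to 1

rng = np.random.default_rng(0)

# A feature space with many high-variance directions (slow spectral decay) ...
isotropic = rng.normal(size=(1000, 64))
# ... versus one dominated by a few directions (fast spectral decay).
low_rank = (rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 64))
            + 0.05 * rng.normal(size=(1000, 64)))

print(spectral_decay(isotropic)[:4])  # roughly uniform mass across directions
print(spectral_decay(low_rank)[:4])   # mass concentrated in the leading values
```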
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed Collaborative Feature-Logits Detector (CFL-Detector)
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z)
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
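The augmentation in idea (2), overlaying tail-class images onto context-rich OOD images, can be sketched as a simple alpha blend. This is an illustrative assumption rather than EAT's actual recipe; the function name and blending weight are hypothetical.

```python
import numpy as np

def overlay_on_ood(tail_img: np.ndarray, ood_img: np.ndarray,
                   alpha: float = 0.7) -> np.ndarray:
    """Alpha-blend a tail-class foreground onto an OOD background image."""
    assert tail_img.shape == ood_img.shape
    return alpha * tail_img + (1.0 - alpha) * ood_img

tail = np.full((32, 32, 3), 0.9)  # stand-in for a context-limited tail-class image
ood = np.zeros((32, 32, 3))       # stand-in for a context-rich OOD image
mixed = overlay_on_ood(tail, ood)
```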
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- Cluster-aware Contrastive Learning for Unsupervised Out-of-distribution Detection [0.0]
Unsupervised out-of-distribution (OOD) detection aims to separate samples falling outside the distribution of the training data without label information.
We propose a Cluster-aware Contrastive Learning (CCL) framework for unsupervised OOD detection, which considers both instance-level and semantic-level information.
arXiv Detail & Related papers (2023-02-06T07:21:03Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Multiple Testing Framework for Out-of-Distribution Detection [27.248375922343616]
We study the problem of Out-of-Distribution (OOD) detection, that is, detecting whether a learning algorithm's output can be trusted at inference time.
We propose a definition for the notion of OOD that includes both the input distribution and the learning algorithm, which provides insights for the construction of powerful tests for OOD detection.
arXiv Detail & Related papers (2022-06-20T00:56:01Z)
- Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning [16.77134235390429]
A key challenge of OOD detection is learning discriminative semantic features.
We propose a supervised contrastive learning objective to minimize intra-class variance.
We employ an adversarial augmentation mechanism to obtain pseudo diverse views of a sample.
arXiv Detail & Related papers (2021-05-29T12:54:22Z)
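The supervised contrastive objective mentioned above, which minimises intra-class variance by pulling same-class embeddings together, can be sketched in NumPy as follows. This follows the standard supervised contrastive (SupCon) formulation; the function name and temperature value are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def supcon_loss(z: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss over L2-normalised embeddings z of shape (n, d)."""
    n = z.shape[0]
    sim = z @ z.T / tau                       # pairwise similarities scaled by temperature
    np.fill_diagonal(sim, -np.inf)            # exclude self-pairs from the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positives: same label, excluding the anchor itself.
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # Average negative log-probability over each anchor's positives.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(per_anchor.mean())

# Two tight clusters aligned with their labels give a lower loss than mismatched labels.
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(supcon_loss(z, np.array([0, 0, 1, 1])))  # low: positives sit close together
print(supcon_loss(z, np.array([0, 1, 0, 1])))  # high: positives sit far apart
```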
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.