Contrastive Training for Improved Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2007.05566v1
- Date: Fri, 10 Jul 2020 18:40:37 GMT
- Title: Contrastive Training for Improved Out-of-Distribution Detection
- Authors: Jim Winkens, Rudy Bunel, Abhijit Guha Roy, Robert Stanforth, Vivek
Natarajan, Joseph R. Ledsam, Patricia MacWilliams, Pushmeet Kohli, Alan
Karthikesalingam, Simon Kohl, Taylan Cemgil, S. M. Ali Eslami and Olaf
Ronneberger
- Abstract summary: This paper proposes and investigates the use of contrastive training to boost OOD detection performance.
We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks.
- Score: 36.61315534166451
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reliable detection of out-of-distribution (OOD) inputs is increasingly
understood to be a precondition for deployment of machine learning systems.
This paper proposes and investigates the use of contrastive training to boost
OOD detection performance. Unlike leading methods for OOD detection, our
approach does not require access to examples labeled explicitly as OOD, which
can be difficult to collect in practice. We show in extensive experiments that
contrastive training significantly helps OOD detection performance on a number
of common benchmarks. By introducing and employing the Confusion Log
Probability (CLP) score, which quantifies the difficulty of the OOD detection
task by capturing the similarity of inlier and outlier datasets, we show that
our method especially improves performance in the `near OOD' classes -- a
particularly challenging setting for previous methods.
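The scoring stage implied by the abstract can be made concrete. A minimal sketch, assuming the OOD score is computed in the feature space of the contrastively trained encoder via class-conditional Gaussians with a tied covariance (the helper names and toy dimensions below are illustrative, not from the paper):

```python
import numpy as np

def fit_class_gaussians(feats, labels, n_classes):
    """Fit per-class means and a shared (tied) covariance on in-distribution
    features extracted by the contrastively trained encoder."""
    d = feats.shape[1]
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    centered = feats - means[labels]            # subtract each sample's class mean
    cov = centered.T @ centered / len(feats)    # tied covariance estimate
    precision = np.linalg.inv(cov + 1e-6 * np.eye(d))
    return means, precision

def mahalanobis_ood_score(x, means, precision):
    """OOD score = Mahalanobis distance to the closest class-conditional
    Gaussian; higher means more likely out-of-distribution."""
    diffs = x[None, :] - means                  # (n_classes, d)
    return np.einsum('cd,de,ce->c', diffs, precision, diffs).min()

# Toy usage with random stand-ins for encoder features.
rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(1000, 32)), rng.integers(0, 10, size=1000)
means, precision = fit_class_gaussians(feats, labels, n_classes=10)
print(mahalanobis_ood_score(rng.normal(size=32), means, precision))
```

The CLP score can be read as the log probability that samples from a candidate outlier class get classified into the inlier classes. A hedged sketch of that reading (the paper computes it with an ensemble of classifiers trained on the joint inlier/outlier data, which is omitted here):

```python
import numpy as np

def confusion_log_probability(probs, inlier_class_ids):
    """probs: (n_samples, n_total_classes) softmax outputs on samples from one
    outlier class, from a classifier trained on inlier + outlier classes.
    Higher CLP = the class is 'nearer' OOD and harder to detect."""
    inlier_mass = probs[:, inlier_class_ids].sum(axis=1)
    return float(np.log(inlier_mass.mean()))
```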
Related papers
- Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the non-trivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-09-22T05:40:25Z)
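The abstract above only states that MaCS enlarges the gap between ID and OOD confidence scores with a margin bound; a minimal hinge-style sketch of that idea (the margin value, pairwise formulation, and score function are assumptions, not the authors' exact objective):

```python
import torch

def margin_score_loss(id_scores, ood_scores, margin=1.0):
    """Hinge penalty that pushes every ID confidence score above every OOD
    score by at least `margin`, compacting the decision boundary."""
    gap = id_scores[:, None] - ood_scores[None, :]   # (n_id, n_ood) pairwise gaps
    return torch.relu(margin - gap).mean()
```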
- OAML: Outlier Aware Metric Learning for OOD Detection Enhancement [5.357756138014614]
Out-of-distribution (OOD) detection methods have been developed to identify objects that a model has not seen during training.
The Outlier Exposure (OE) methods use auxiliary datasets to train OOD detectors directly.
We propose the Outlier Aware Metric Learning (OAML) framework to tackle the collection and learning of representative OOD samples.
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
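OAML is described above only as metric learning over auxiliary outliers; a triplet-style sketch of what an outlier-aware metric objective could look like (a generic illustration under that assumption, not the OAML loss itself):

```python
import torch
import torch.nn.functional as F

def outlier_aware_triplet_loss(anchor, positive, outlier, margin=0.5):
    """Keep an ID anchor close to a same-class ID positive while staying at
    least `margin` farther from an auxiliary-dataset outlier embedding."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_out = F.pairwise_distance(anchor, outlier)
    return torch.relu(d_pos - d_out + margin).mean()
```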
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, which creates a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
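A minimal sketch of the weight-perturbation idea: sample random perturbations of the final-layer class projections and aggregate the resulting confidences. The aggregation by mean max-softmax below is a simplification of WeiPer's actual score construction, and the perturbation scale is an assumption:

```python
import torch

def perturbed_projection_confidence(features, fc_weight, fc_bias,
                                    n_perturb=16, scale=0.05):
    """Average max-softmax confidence under random perturbations of the
    class projection weights (higher = more ID-like)."""
    confs = []
    for _ in range(n_perturb):
        noise = scale * torch.randn_like(fc_weight) \
                      * fc_weight.norm(dim=1, keepdim=True)
        logits = features @ (fc_weight + noise).T + fc_bias
        confs.append(logits.softmax(dim=1).max(dim=1).values)
    return torch.stack(confs).mean(dim=0)
```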
- Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? [37.36999826208225]
We study the effect of PT-OOD (OOD data encountered during pre-training) on the OOD detection performance of pre-trained networks.
We find that the low linear separability of PT-OOD in the feature space heavily degrades the PT-OOD detection performance.
We propose a solution unique to large-scale pre-trained models: leveraging their powerful instance-by-instance discriminative representations.
arXiv Detail & Related papers (2023-10-02T02:01:00Z)
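One common way to leverage instance-by-instance discriminative representations is a nearest-neighbor distance score in the frozen feature space; treating that as the intended instantiation is my assumption:

```python
import numpy as np

def knn_ood_score(query_feat, id_feature_bank, k=10):
    """Distance to the k-th nearest ID feature; larger = more OOD. Features
    are L2-normalized so Euclidean distance tracks cosine similarity."""
    q = query_feat / np.linalg.norm(query_feat)
    bank = id_feature_bank / np.linalg.norm(id_feature_bank, axis=1, keepdims=True)
    dists = np.linalg.norm(bank - q, axis=1)
    return float(np.sort(dists)[k - 1])
```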
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Evaluating out-of-distribution (OOD) detection methods assumes access to test ground truths, i.e., labels indicating whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can boost OOD detection performance significantly.
We take Masked Image Modeling as the pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
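A hedged sketch of the reconstruction-based scoring idea the summary points at: mask part of the input, reconstruct it with a masked-image model, and use the error on the masked region as the OOD score (the paper's full pipeline differs; `model` here stands for any masked autoencoder):

```python
import torch

def masked_reconstruction_score(model, x, mask_ratio=0.5):
    """Masked-reconstruction OOD score: higher error = more OOD.
    x: (C, H, W) image tensor in [0, 1]; model: masked autoencoder."""
    mask = (torch.rand_like(x) < mask_ratio).float()
    recon = model(x * (1 - mask))                 # reconstruct from visible pixels
    err = ((recon - x) ** 2 * mask).sum() / mask.sum()
    return err.item()
```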
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
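POORE's pseudo-OOD generation is only summarized above; one generic way to manufacture pseudo-OOD points from IND data is to mix embeddings of unrelated examples and regularize the model toward a uniform posterior on them. The mixing scheme below is my stand-in, not necessarily POORE's:

```python
import torch
import torch.nn.functional as F

def pseudo_ood_regularizer(classifier, emb_a, emb_b, alpha=0.5):
    """Mix embeddings of two unrelated IND examples into pseudo-OOD points,
    then penalize confident predictions on them (cross-entropy to uniform)."""
    pseudo = alpha * emb_a + (1 - alpha) * emb_b
    log_probs = F.log_softmax(classifier(pseudo), dim=1)
    return -log_probs.mean()      # mean over batch and classes = CE vs. uniform
```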
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
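The confidence loss analyzed in the entry above is the standard Outlier Exposure objective: cross-entropy on ID data plus cross-entropy to the uniform distribution on OOD training data. A sketch (`lam` is a weighting hyperparameter):

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(id_logits, id_labels, ood_logits, lam=0.5):
    """Outlier Exposure confidence loss: fit ID labels while pushing OOD
    predictions toward the uniform distribution."""
    ce_id = F.cross_entropy(id_logits, id_labels)
    ce_uniform = -F.log_softmax(ood_logits, dim=1).mean()
    return ce_id + lam * ce_uniform
```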
- MOOD: Multi-level Out-of-distribution Detection [13.207044902083057]
Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, Multi-level Out-Of-distribution Detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
arXiv Detail & Related papers (2021-04-30T02:18:31Z)
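MOOD routes inputs to intermediate exits by an input-complexity estimate and scores them there; a sketch using compressed size as the complexity proxy and an energy score at the chosen exit (the thresholds are made up, and the routing rule is a simplification):

```python
import zlib
import numpy as np

def choose_exit(image_bytes, thresholds=(800, 1600, 2400)):
    """Route low-complexity inputs to early classifier exits; compressed
    byte length is a cheap stand-in for input complexity."""
    complexity = len(zlib.compress(image_bytes))
    for i, t in enumerate(thresholds):
        if complexity < t:
            return i
    return len(thresholds)            # most complex inputs use the final exit

def energy_score(logits):
    """Energy-based OOD score at the chosen exit; higher = more OOD."""
    return -np.logaddexp.reduce(logits)
```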
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
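After the regularized training described in the entry above, ensemble members agree on ID samples and contradict each other on OOD samples, so predictive disagreement over the test batch becomes the detection score. A standard mutual-information disagreement measure (the artificial-labeling training itself is omitted):

```python
import torch

def ensemble_disagreement(member_logits):
    """member_logits: (n_members, batch, n_classes). Returns per-sample
    mutual information between prediction and member; higher = more OOD."""
    probs = member_logits.softmax(dim=-1).clamp_min(1e-12)
    mean_probs = probs.mean(dim=0)
    ent_of_mean = -(mean_probs * mean_probs.log()).sum(dim=-1)
    mean_of_ent = -(probs * probs.log()).sum(dim=-1).mean(dim=0)
    return ent_of_mean - mean_of_ent
```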
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with small, semantics-preserving adversarial perturbations.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
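A compressed sketch of ALOE-style robust training: take worst-case perturbations of both inliers (against the classification loss) and auxiliary outliers (against the uniformity loss), then train on the perturbed batch. A single-step FGSM attack stands in for the stronger attack used in practice, and `eps`/`lam` are assumed values:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, loss_fn, eps=8 / 255):
    """One-step perturbation that increases `loss_fn` (FGSM sketch)."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x)), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def aloe_style_loss(model, x_id, y_id, x_out, lam=0.5):
    """Worst-case cross-entropy on inliers plus worst-case cross-entropy to
    the uniform distribution on auxiliary outliers."""
    x_id_adv = fgsm_perturb(model, x_id, lambda lg: F.cross_entropy(lg, y_id))
    x_out_adv = fgsm_perturb(model, x_out,
                             lambda lg: -F.log_softmax(lg, dim=1).mean())
    ce_id = F.cross_entropy(model(x_id_adv), y_id)
    ce_uniform = -F.log_softmax(model(x_out_adv), dim=1).mean()
    return ce_id + lam * ce_uniform
```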
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.