Partial and Asymmetric Contrastive Learning for Out-of-Distribution
Detection in Long-Tailed Recognition
- URL: http://arxiv.org/abs/2207.01160v1
- Date: Mon, 4 Jul 2022 01:53:07 GMT
- Title: Partial and Asymmetric Contrastive Learning for Out-of-Distribution
Detection in Long-Tailed Recognition
- Authors: Haotao Wang, Aston Zhang, Yi Zhu, Shuai Zheng, Mu Li, Alex Smola,
Zhangyang Wang
- Abstract summary: We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29\%$, $1.45\%$, and $0.69\%$ in anomaly detection false positive rate (FPR) and by $3.24\%$, $4.06\%$, and $7.89\%$ in in-distribution classification accuracy on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, respectively.
- Score: 80.07843757970923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing out-of-distribution (OOD) detection methods are typically
benchmarked on training sets with balanced class distributions. However, in
real-world applications, it is common for the training sets to have long-tailed
distributions. In this work, we first demonstrate that existing OOD detection
methods commonly suffer from significant performance degradation when the
training set is long-tail distributed. Through analysis, we posit that this is
because the models struggle to distinguish the minority tail-class
in-distribution samples from the true OOD samples, which makes the tail classes
more likely to be falsely detected as OOD. To solve this problem, we propose
Partial and Asymmetric Supervised Contrastive Learning (PASCL), which
explicitly encourages the model to distinguish between tail-class
in-distribution samples and OOD samples. To further boost in-distribution
classification accuracy, we propose Auxiliary Branch Finetuning, which uses two
separate branches of BN and classification layers for anomaly detection and
in-distribution classification, respectively. The intuition is that
in-distribution and OOD anomaly data have different underlying distributions.
Our method outperforms the previous state-of-the-art method by $1.29\%$,
$1.45\%$, and $0.69\%$ in anomaly detection false positive rate (FPR) and by
$3.24\%$, $4.06\%$, and $7.89\%$ in in-distribution classification accuracy on CIFAR10-LT, CIFAR100-LT,
and ImageNet-LT, respectively. Code and pre-trained models are available at
https://github.com/amazon-research/long-tailed-ood-detection.
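The abstract describes PASCL only at a high level. As a rough illustration, here is a minimal sketch of what a partial and asymmetric supervised contrastive loss could look like, assuming (as the abstract suggests but does not spell out) that only tail-class in-distribution samples act as anchors and that auxiliary OOD samples appear only as negatives. The function name, signature, and masking scheme are illustrative, not the paper's actual implementation.

```python
import torch

def pascl_loss(feats, labels, is_tail, is_ood, temperature=0.1):
    """Sketch of a partial/asymmetric supervised contrastive loss.

    feats:   (N, D) L2-normalized embeddings of ID and auxiliary OOD samples
    labels:  (N,) class labels (OOD samples can carry any dummy label)
    is_tail: (N,) bool mask, True for tail-class ID samples
    is_ood:  (N,) bool mask, True for auxiliary OOD samples
    """
    n = feats.size(0)
    sim = feats @ feats.t() / temperature                  # pairwise similarity
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(eye, -1e9)                       # exclude self-pairs

    # "Partial": only tail-class ID samples serve as anchors (assumption).
    anchors = is_tail & ~is_ood
    # Positives: other ID samples of the same class. OOD samples only ever
    # appear in the denominator as negatives ("asymmetric": they are pushed
    # away from tail anchors but never act as anchors themselves).
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~is_ood.unsqueeze(0) & ~eye

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_cnt = pos.sum(1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos_cnt
    return loss[anchors].mean()                            # assumes >= 1 anchor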
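```

In this reading, "partial" shows up as restricting the anchor set and "asymmetric" as OOD samples being pushed away without ever anchoring the loss themselves.

Similarly, Auxiliary Branch Finetuning is described only as using two separate branches of BN and classification layers; a minimal sketch of such an architecture, with illustrative names and layer sizes, might look as follows.

```python
import torch.nn as nn

class AuxiliaryBranchModel(nn.Module):
    """Shared backbone with separate BN + classifier branches, one for
    anomaly (OOD) detection and one for in-distribution classification.
    Names and layer sizes are illustrative."""

    def __init__(self, backbone, feat_dim=512, num_classes=10):
        super().__init__()
        self.backbone = backbone                     # shared feature extractor
        self.bn_ood = nn.BatchNorm1d(feat_dim)       # stats for OOD-mixed batches
        self.bn_id = nn.BatchNorm1d(feat_dim)        # stats for pure ID batches
        self.head_ood = nn.Linear(feat_dim, num_classes)
        self.head_id = nn.Linear(feat_dim, num_classes)

    def forward(self, x, branch="id"):
        f = self.backbone(x)                         # (N, feat_dim) features
        if branch == "ood":
            return self.head_ood(self.bn_ood(f))     # logits for OOD scoring
        return self.head_id(self.bn_id(f))           # logits for classification
```

The two-branch design follows the abstract's intuition: because in-distribution and OOD data have different underlying distributions, each task keeps its own normalization statistics rather than sharing one set of BN parameters.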
Related papers
- Long-Tailed Out-of-Distribution Detection via Normalized Outlier Distribution Adaptation [24.216526107669345]
A key challenge in Out-of-Distribution (OOD) detection is the absence of ground-truth OOD samples during training.
We propose normalized outlier distribution adaptation (AdaptOD) to tackle this distribution shift problem.
AdaptOD effectively adapts a vanilla outlier distribution based on the outlier samples to the true OOD distribution.
arXiv Detail & Related papers (2024-10-28T07:54:29Z) - Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning [10.696635172502141]
- Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning [10.696635172502141]
We introduce Representation Norm Amplification (RNA), a method for detecting out-of-distribution samples in long-tail learning.
Experiments show that RNA achieves superior performance in both OOD detection and classification compared to the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-20T09:27:07Z) - Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated
- Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning [24.6581764192229]
Existing out-of-distribution (OOD) detection methods have shown great success on balanced datasets.
On long-tailed training data, OOD samples are often wrongly classified into head classes, and/or tail-class samples are treated as OOD.
We introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space, and 2) an outlier-class-aware logit calibration method is defined to enhance the long-tailed classification confidence.
arXiv Detail & Related papers (2023-12-17T11:11:02Z) - EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Learnable Distribution Calibration for Few-Shot Class-Incremental
- Learnable Distribution Calibration for Few-Shot Class-Incremental Learning [122.2241120474278]
Few-shot class-incremental learning (FSCIL) faces challenges of memorizing old class distributions and estimating new class distributions given few training samples.
We propose a learnable distribution calibration (LDC) approach, with the aim to systematically solve these two challenges using a unified framework.
arXiv Detail & Related papers (2022-10-01T09:40:26Z) - Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD
Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z) - Learn what you can't learn: Regularized Ensembles for Transductive
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)