Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated
Outlier Class Learning
- URL: http://arxiv.org/abs/2312.10686v2
- Date: Tue, 19 Dec 2023 07:49:07 GMT
- Title: Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated
Outlier Class Learning
- Authors: Wenjun Miao, Guansong Pang, Tianqi Li, Xiao Bai, Jin Zheng
- Abstract summary: Existing out-of-distribution (OOD) detection methods show great success on balanced datasets but struggle in long-tailed recognition, where OOD samples are often wrongly classified into head classes and/or tail-class samples are treated as OOD samples.
We introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in outlier class learning to distinguish OOD samples from both head and tail classes in the representation space and 2) an outlier-class-aware logit calibration method is defined to enhance long-tailed classification confidence.
- Score: 24.6581764192229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing out-of-distribution (OOD) detection methods have shown great
success on balanced datasets but become ineffective in long-tailed recognition (LTR)
scenarios where 1) OOD samples are often wrongly classified into head classes
and/or 2) tail-class samples are treated as OOD samples. To address these
issues, current studies fit a prior distribution of auxiliary/pseudo OOD data
to the long-tailed in-distribution (ID) data. However, it is difficult to
obtain such an accurate prior distribution given the unknowingness of real OOD
samples and heavy class imbalance in LTR. A straightforward solution to avoid
the requirement of this prior is to learn an outlier class to encapsulate the
OOD samples. The main challenge is then to tackle the aforementioned confusion
between OOD samples and head/tail-class samples when learning the outlier
class. To this end, we introduce a novel calibrated outlier class learning
(COCL) approach, in which 1) a debiased large margin learning method is
introduced in the outlier class learning to distinguish OOD samples from both
head and tail classes in the representation space and 2) an outlier-class-aware
logit calibration method is defined to enhance the long-tailed classification
confidence. Extensive empirical results on three popular benchmarks, CIFAR10-LT,
CIFAR100-LT, and ImageNet-LT, demonstrate that COCL substantially outperforms
state-of-the-art OOD detection methods in LTR while being able to improve the
classification accuracy on ID data. Code is available at
https://github.com/mala-lab/COCL.
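The outlier-class-aware calibration idea can be pictured with a minimal sketch. This is not the paper's exact formulation: the function name `calibrated_scores` and the `tau * log(prior)` logit-adjustment term are assumptions used only to illustrate the general recipe of a (K+1)-th outlier class combined with prior-aware debiasing of the ID logits.

```python
import numpy as np

def calibrated_scores(logits, class_priors, tau=1.0):
    """Hypothetical sketch of outlier-class-aware logit calibration.
    `logits` holds K ID entries plus a final outlier-class entry.
    ID logits are debiased with a logit-adjustment term so that head
    classes do not dominate; the outlier probability serves as the
    OOD score."""
    logits = np.asarray(logits, dtype=float)
    id_logits, outlier_logit = logits[:-1], logits[-1]
    # Assumed calibration: subtract tau * log(prior) from each ID logit.
    adjusted = id_logits - tau * np.log(np.asarray(class_priors))
    full = np.append(adjusted, outlier_logit)
    probs = np.exp(full - full.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs[:-1], probs[-1]  # ID class probabilities, OOD score

id_probs, ood_score = calibrated_scores(
    logits=[2.0, 0.5, 1.0],   # 2 ID classes + 1 outlier class
    class_priors=[0.9, 0.1],  # long-tailed class prior
)
```

With a head-heavy prior, the adjustment boosts the tail class relative to the head class before the softmax, which is the debiasing intuition the abstract describes.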
Related papers
- CLIPScope: Enhancing Zero-Shot OOD Detection with Bayesian Scoring [16.0716584170549]
We introduce CLIPScope, a zero-shot OOD detection approach that normalizes the confidence score of a sample by class likelihoods.
CLIPScope incorporates a novel strategy to mine OOD classes from a large lexical database.
It selects class labels that are farthest and nearest to ID classes in terms of CLIP embedding distance to maximize coverage of OOD samples.
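The normalization strategy can be illustrated with a toy score; `clipscope_like_score`, the temperature value, and the similarity inputs below are hypothetical stand-ins, not CLIPScope's actual scoring function.

```python
import numpy as np

def clipscope_like_score(sims_id, sims_ood, temp=0.01):
    """Illustrative sketch: normalize a sample's confidence over both
    ID class prompts and mined OOD class prompts, so probability mass
    absorbed by the OOD classes lowers the ID confidence score."""
    sims = np.concatenate([sims_id, sims_ood]) / temp
    probs = np.exp(sims - sims.max())  # stable softmax over all classes
    probs /= probs.sum()
    return probs[: len(sims_id)].max()  # high => likely in-distribution

# Same ID similarities; adding a strongly matching mined OOD class
# should drain confidence from the ID classes.
score_clean = clipscope_like_score(np.array([0.30, 0.25]), np.array([0.20]))
score_confused = clipscope_like_score(np.array([0.30, 0.25]), np.array([0.35]))
```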
arXiv Detail & Related papers (2024-05-23T16:03:55Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- How Does Unlabeled Data Provably Help Out-of-Distribution Detection? [63.41681272937562]
Harnessing unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and out-of-distribution (OOD) data.
This paper introduces a new learning framework SAL (Separate And Learn) that offers both strong theoretical guarantees and empirical effectiveness.
arXiv Detail & Related papers (2024-02-05T20:36:33Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
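Idea (2) can be sketched roughly as a paste operation; the function `overlay_tail_on_ood`, the `box` crop spec, and the naive top-left crop are hypothetical simplifications of the paper's actual augmentation.

```python
import numpy as np

def overlay_tail_on_ood(tail_img, ood_img, box):
    """Sketch (not the paper's exact recipe): paste a context-limited
    tail-class image onto a context-rich OOD image so the tail class
    is seen against varied backgrounds. `box` = (top, left, h, w)."""
    t, l, h, w = box
    out = ood_img.copy()
    patch = tail_img[:h, :w]  # naive crop of the tail-class image
    out[t:t + h, l:l + w] = patch
    return out

rng = np.random.default_rng(0)
tail = rng.random((8, 8, 3))    # small tail-class image
ood = rng.random((16, 16, 3))   # larger OOD background
mixed = overlay_tail_on_ood(tail, ood, box=(4, 4, 8, 8))
```

The augmented sample keeps the tail-class label, which is what supplies the missing context diversity.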
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by 1.29%, 1.45%, and 0.69% in anomaly detection false positive rate (FPR) and by 3.24%, 4.06%, and 7.89% in in-distribution classification accuracy.
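The asymmetry can be illustrated with a toy loss; `asymmetric_contrastive_loss` below is a hypothetical simplification, not PASCL's actual objective: ID embeddings are pulled toward same-class ID embeddings, while OOD embeddings appear only as negatives and never act as anchors.

```python
import numpy as np

def asymmetric_contrastive_loss(z_id, labels, z_ood, temp=0.1):
    """Toy sketch of an asymmetric supervised contrastive loss:
    OOD embeddings contribute only to the denominator (negatives)."""
    def norm(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    z_id, z_ood = norm(np.asarray(z_id, float)), norm(np.asarray(z_ood, float))
    losses = []
    for i in range(len(z_id)):  # anchors are ID samples only
        pos = [j for j in range(len(z_id)) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        sims_id = z_id @ z_id[i] / temp
        sims_ood = z_ood @ z_id[i] / temp
        # Denominator: all other ID samples plus all OOD samples.
        denom = np.exp(np.delete(sims_id, i)).sum() + np.exp(sims_ood).sum()
        for j in pos:
            losses.append(-(sims_id[j] - np.log(denom)))
    return float(np.mean(losses))

rng = np.random.default_rng(1)
z_id = rng.normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
z_ood = rng.normal(size=(4, 4))
loss = asymmetric_contrastive_loss(z_id, labels, z_ood)
```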
arXiv Detail & Related papers (2022-07-04T01:53:07Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- On The Consistency Training for Open-Set Semi-Supervised Learning [44.046578996049654]
We study how OOD samples affect training in both low- and high-dimensional spaces.
Our method makes better use of OOD samples and achieves state-of-the-art results.
arXiv Detail & Related papers (2021-01-19T12:38:17Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
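The scoring intuition behind contradictory ensemble predictions can be sketched briefly. The paper's actual method trains regularized ensembles transductively on test batches; the function below only illustrates how disagreement among members can be turned into an OOD signal, and its name and the KL-based measure are assumptions.

```python
import numpy as np

def ensemble_disagreement_score(probs):
    """Sketch: average KL divergence of each ensemble member's
    prediction from the ensemble mean. Members that contradict each
    other (as they would on OOD inputs) yield a high score.
    `probs` has shape (n_models, n_classes)."""
    probs = np.asarray(probs, dtype=float)
    mean = probs.mean(axis=0)
    kl = (probs * (np.log(probs + 1e-12) - np.log(mean + 1e-12))).sum(axis=1)
    return float(kl.mean())

agree = ensemble_disagreement_score([[0.9, 0.1], [0.9, 0.1]])
disagree = ensemble_disagreement_score([[0.9, 0.1], [0.1, 0.9]])
```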