Long-Tailed Out-of-Distribution Detection with Refined Separate Class Learning
- URL: http://arxiv.org/abs/2509.17034v2
- Date: Thu, 25 Sep 2025 10:41:54 GMT
- Title: Long-Tailed Out-of-Distribution Detection with Refined Separate Class Learning
- Authors: Shuai Feng, Yuxin Ge, Yuntao Du, Mingcai Chen, Chongjun Wang, Lei Feng
- Abstract summary: Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models. We propose a novel approach termed Refined Separate Class Learning (RSCL). RSCL achieves superior OOD detection performance while improving the classification accuracy on in-distribution data.
- Score: 26.197517441186267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models. However, when training data follows a long-tailed distribution, the model's ability to accurately detect OOD samples is significantly compromised, due to the confusion between OOD samples and head/tail classes. To distinguish OOD samples from both head and tail classes, the separate class learning (SCL) approach has emerged as a promising solution, which separately conducts head-specific and tail-specific class learning. We examine the limitations of existing SCL works and reveal that OOD detection performance is notably influenced by the use of a static scaling temperature value and the presence of uninformative outliers. To mitigate these limitations, we propose a novel approach termed Refined Separate Class Learning (RSCL), which leverages dynamic class-wise temperature adjustment to modulate the temperature parameter for each in-distribution class, and informative outlier mining to identify diverse types of outliers based on their affinity with head and tail classes. Extensive experiments demonstrate that RSCL achieves superior OOD detection performance while improving classification accuracy on in-distribution data.
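The abstract's class-wise temperature idea can be sketched in code. This is a minimal illustrative form only: the abstract does not specify RSCL's dynamic adjustment rule, so the per-class temperatures here are simply supplied as an input (a hypothetical `temps` vector), with, say, a softer temperature for a tail class.

```python
import numpy as np

def classwise_temperature_ce(logits, labels, temps):
    """Cross-entropy where each in-distribution class has its own scaling
    temperature. Illustrative sketch; RSCL's actual dynamic adjustment
    rule is not given in the abstract."""
    scaled = logits / temps                               # (N, C) / (C,) broadcast
    scaled = scaled - scaled.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scaled - np.log(np.exp(scaled).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
temps = np.array([1.0, 1.0, 2.0])  # hypothetical: class 2 (tail) scaled more softly
loss = classwise_temperature_ce(logits, labels, temps)
```

A static-temperature baseline corresponds to `temps` being a constant vector; the point of the per-class vector is that head and tail classes need not share one value.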
Related papers
- DCAC: Dynamic Class-Aware Cache Creates Stronger Out-of-Distribution Detectors [43.8920190045364]
Out-of-distribution (OOD) detection remains a fundamental challenge for deep neural networks. We propose DCAC (Dynamic Class-Aware Cache), a training-free, test-time calibration module that maintains separate caches for each ID class to collect high-entropy samples.
arXiv Detail & Related papers (2026-01-18T16:16:31Z)
- Going Beyond Conventional OOD Detection [0.0]
Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications. We present a unified Approach to Spurious, fine-grained, and Conventional OOD Detection (ASCOOD). Our approach effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes.
arXiv Detail & Related papers (2024-11-16T13:04:52Z)
- Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning [24.6581764192229]
Existing out-of-distribution (OOD) methods have shown great success on balanced datasets.
OOD samples are often wrongly classified into head classes and/or tail-class samples are treated as OOD samples.
We introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space, and 2) an outlier-class-aware logit calibration method is defined to enhance the long-tailed classification confidence.
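The outlier-class-aware logit calibration in idea 2) can be illustrated with a prior-aware adjustment in the spirit of standard logit adjustment. This is a hypothetical form, not COCL's exact rule: ID logits are shifted by the log class prior (boosting rare tail classes), while the appended outlier-class logit is left untouched.

```python
import numpy as np

def calibrated_logits(logits, class_counts, tau=1.0):
    """Prior-aware calibration of in-distribution logits; the outlier class
    (last column) is left unadjusted. Illustrative sketch, not COCL's
    published formulation."""
    priors = class_counts / class_counts.sum()
    adjusted = logits.copy()
    adjusted[:, :-1] = logits[:, :-1] - tau * np.log(priors)  # tail classes get a larger boost
    return adjusted

# 3 ID classes ordered head -> tail, plus 1 outlier class (hypothetical counts).
counts = np.array([1000.0, 100.0, 10.0])
logits = np.array([[1.0, 1.0, 1.0, 0.0]])
out = calibrated_logits(logits, counts)
```

With equal raw logits, the calibrated scores now favor the tail class, which is the intended bias correction for long-tailed classification.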
arXiv Detail & Related papers (2023-12-17T11:11:02Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
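Idea (2), overlaying context-limited tail-class images onto context-rich OOD images, can be sketched as a CutMix-style paste. The box-based pasting here is an assumption for illustration; the paper's exact augmentation recipe may differ.

```python
import numpy as np

def overlay_on_ood(tail_img, ood_img, box):
    """Paste a region of a tail-class image onto an OOD background image
    (a CutMix-style sketch of EAT's augmentation idea; exact recipe may differ)."""
    y0, y1, x0, x1 = box
    out = ood_img.copy()                      # keep the OOD image intact
    out[y0:y1, x0:x1] = tail_img[y0:y1, x0:x1]
    return out

tail = np.ones((4, 4, 3))   # stand-in tail-class image
ood = np.zeros((4, 4, 3))   # stand-in OOD background
mixed = overlay_on_ood(tail, ood, (1, 3, 1, 3))
```

The augmented sample keeps the tail-class label, so the tail class is effectively trained against many diverse OOD contexts.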
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
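The neuron-masking idea can be sketched as scoring through only the "important" feature units. This is a simplified illustration: LINe derives its mask from attribution, which is omitted here, and the mask below is just a given binary vector; the score shown is a plain log-sum-exp energy-style score.

```python
import numpy as np

def masked_energy_score(features, weights, mask):
    """Compute an energy-style OOD score using only masked-in feature units
    (sketch of neuron masking; LINe's attribution-based mask selection
    is omitted here)."""
    logits = (features * mask) @ weights          # zero out unimportant units
    return np.log(np.exp(logits).sum(axis=1))     # log-sum-exp over classes

features = np.ones((2, 3))                  # stand-in penultimate features
weights = np.ones((3, 2))                   # stand-in classifier weights
mask = np.array([1.0, 0.0, 1.0])            # hypothetical importance mask
scores = masked_energy_score(features, weights, mask)
```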
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- How robust are pre-trained models to distribution shift? [82.08946007821184]
We show how spurious correlations affect the performance of popular self-supervised learning (SSL) and autoencoder-based (AE) models.
We develop a novel evaluation scheme with the linear head trained on out-of-distribution (OOD) data, to isolate the performance of the pre-trained models from a potential bias of the linear head used for evaluation.
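The linear-head evaluation scheme can be illustrated with a simple probe on frozen features. The closed-form ridge fit below is an assumption for illustration (the paper's exact probe training is not specified in this summary); the point is that only the linear head sees the OOD data, isolating the pre-trained representation.

```python
import numpy as np

def fit_linear_head(features, labels, l2=1e-3):
    """Fit a linear head on frozen features via ridge regression to one-hot
    targets (an illustrative probe, not the paper's exact procedure).
    Predict with argmax(features @ W)."""
    n, d = features.shape
    Y = np.eye(labels.max() + 1)[labels]  # one-hot targets
    W = np.linalg.solve(features.T @ features + l2 * np.eye(d), features.T @ Y)
    return W

# Toy frozen features and labels (hypothetical data).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]])
y = np.array([0, 1, 0])
W = fit_linear_head(X, y)
preds = (X @ W).argmax(axis=1)
```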
arXiv Detail & Related papers (2022-06-17T16:18:28Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions [18.351120611713586]
We develop and rigorously evaluate a deep learning based system that can accurately classify skin conditions.
We frame this task as an out-of-distribution (OOD) detection problem.
Our novel approach, hierarchical outlier detection (HOD), assigns multiple abstention classes for each training class and jointly performs a coarse classification of inliers vs. outliers.
arXiv Detail & Related papers (2021-04-08T15:15:22Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models under long-tailed distributions.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
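The group-wise training idea behind balanced group softmax can be sketched as computing softmax independently within each class group (classes binned by training-instance count), so head classes do not suppress tail classes in normalization. The grouping below is a hypothetical example; BAGS's full training procedure (including its background handling) is not reproduced here.

```python
import numpy as np

def group_softmax(logits, groups):
    """Softmax computed independently within each class group
    (illustrative sketch of the balanced group softmax idea)."""
    probs = np.zeros_like(logits)
    for g in groups:
        z = logits[:, g] - logits[:, g].max(axis=1, keepdims=True)  # stability
        e = np.exp(z)
        probs[:, g] = e / e.sum(axis=1, keepdims=True)              # normalize within group
    return probs

groups = [[0, 1], [2, 3]]                      # e.g. a head-class group and a tail-class group
logits = np.array([[2.0, 1.0, 0.5, 0.2]])
p = group_softmax(logits, groups)
```

Each group's probabilities sum to one on their own, so a tail class competes only with similarly sized classes during training.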
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.