RankOOD - Class Ranking-based Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2511.19996v1
- Date: Tue, 25 Nov 2025 07:02:56 GMT
- Title: RankOOD - Class Ranking-based Out-of-Distribution Detection
- Authors: Dishanika Denipitiyage, Naveen Karunanayake, Suranga Seneviratne, Sanjay Chawla,
- Abstract summary: We propose a rank-based Out-of-Distribution (OOD) detection approach based on training a model with the Plackett-Luce loss. Our approach is based on the insight that in a deep learning model trained with the Cross-Entropy loss, each in-distribution (ID) class prediction induces a characteristic ranking pattern over the classes.
- Score: 5.447909365133452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose RankOOD, a rank-based Out-of-Distribution (OOD) detection approach based on training a model with the Plackett-Luce loss, which is now extensively used for preference alignment tasks in foundation models. Our approach is based on the insight that in a deep learning model trained with the Cross-Entropy loss, each in-distribution (ID) class prediction induces a characteristic ranking pattern over the classes. The RankOOD framework formalizes this insight by first extracting a rank list for each class using an initial classifier and then running another round of training with the Plackett-Luce loss, where the class rank, a fixed permutation for each class, is the predicted variable. An OOD example may be assigned to an ID class with high probability, but the probability of it respecting that class's ranking is likely to be small. RankOOD achieves SOTA performance on the near-OOD TinyImageNet evaluation benchmark, reducing FPR95 by 4.3%.
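To make the Plackett-Luce objective in the abstract concrete, the following is a minimal sketch of its negative log-likelihood for a single example: the model's class scores are read off in the order given by a fixed per-class ranking, and each stage is a softmax choice over the classes not yet ranked. The function name and pure-Python formulation are illustrative assumptions, not the paper's implementation.

```python
import math

def plackett_luce_nll(scores, ranking):
    """Negative log-likelihood of a full ranking under the Plackett-Luce model.

    scores:  list of K real-valued class scores (logits).
    ranking: permutation of range(K), best-ranked class first.
    """
    # Reorder scores so position i holds the score of the i-th ranked class.
    s = [scores[c] for c in ranking]
    nll = 0.0
    for i in range(len(s)):
        # Stage i: the i-th ranked class is chosen among the remaining ones,
        # with probability exp(s[i]) / sum_{j >= i} exp(s[j]).
        denom = math.log(sum(math.exp(x) for x in s[i:]))
        nll -= s[i] - denom
    return nll
```

Under this loss, an input whose scores contradict its assigned class ranking incurs a high NLL, which is the quantity RankOOD can threshold to flag OOD examples.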
Related papers
- ProHOC: Probabilistic Hierarchical Out-of-Distribution Classification via Multi-Depth Networks [10.894582817549042]
Out-of-distribution (OOD) detection in deep learning has traditionally been framed as a binary task. We propose a framework for detecting and classifying OOD samples in a given class hierarchy.
arXiv Detail & Related papers (2025-03-27T11:39:55Z) - Image-Caption Encoding for Improving Zero-Shot Generalization [12.906307770270026]
We show that when an OOD data point is misclassified, the correct class can be typically found in the Top-K predicted classes.
In order to steer the model prediction toward the correct class within the top predicted classes, we propose the Image-Caption (ICE) method.
Our method can be easily combined with other SOTA methods to enhance Top-1 OOD accuracies by 0.5% on average and up to 3% on challenging datasets.
arXiv Detail & Related papers (2024-02-05T01:14:07Z) - RankFeat&RankWeight: Rank-1 Feature/Weight Removal for Out-of-distribution Detection [66.27699658243391]
RankFeat is a simple yet effective post hoc approach for OOD detection. RankWeight is also post hoc and only requires computing the rank-1 matrix once. RankFeat achieves state-of-the-art performance and reduces the average false positive rate (FPR95) by 17.90%.
arXiv Detail & Related papers (2023-11-23T12:17:45Z) - Unified Classification and Rejection: A One-versus-All Framework [47.58109235690227]
We build a unified framework for building open set classifiers for both classification and OOD rejection.
By decomposing the $ K $-class problem into $ K $ one-versus-all (OVA) binary classification tasks, we show that combining the scores of OVA classifiers can give $ (K+1) $-class posterior probabilities.
Experiments on popular OSR and OOD detection datasets demonstrate that the proposed framework, using a single multi-class classifier, yields competitive performance.
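The OVA decomposition above can be sketched as follows: one plausible way to combine $K$ one-versus-all sigmoid scores into $(K+1)$-class posteriors is to treat class $k$ as "classifier $k$ fires, all others do not" and the extra class as "no classifier fires" (rejection/OOD). This construction is an illustrative assumption, not necessarily the exact rule used in the cited paper.

```python
import math

def ova_posteriors(ova_logits):
    """Combine K one-versus-all logits into (K+1) posterior probabilities,
    where the extra class models 'none of the K' (rejection / OOD)."""
    p = [1.0 / (1.0 + math.exp(-s)) for s in ova_logits]  # per-class sigmoids
    unnorm = []
    for k in range(len(p)):
        # Class k: classifier k says positive, every other classifier says negative.
        unnorm.append(p[k] * math.prod(1.0 - p[j] for j in range(len(p)) if j != k))
    # Rejection class: every OVA classifier says negative.
    unnorm.append(math.prod(1.0 - pk for pk in p))
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

When all OVA logits are strongly negative, the rejection posterior dominates, which is the behavior an open-set classifier needs.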
arXiv Detail & Related papers (2023-11-22T12:47:12Z) - RankDNN: Learning to Rank for Few-shot Learning [70.49494297554537]
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification.
It provides a new perspective on few-shot learning and is complementary to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-28T13:59:31Z) - Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [80.07843757970923]
We show that existing OOD detection methods suffer from significant performance degradation when the training set is long-tail distributed.
We propose Partial and Asymmetric Supervised Contrastive Learning (PASCL), which explicitly encourages the model to distinguish between tail-class in-distribution samples and OOD samples.
Our method outperforms the previous state-of-the-art method by $1.29%$, $1.45%$, $0.69%$ in anomaly detection false positive rate (FPR) and $3.24%$, $4.06%$, $7.89%$ in-distribution
arXiv Detail & Related papers (2022-07-04T01:53:07Z) - P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning [38.60274348013499]
We identify and study the two mismatches between pre-training and ranking fine-tuning.
To mitigate these gaps, we propose Pre-trained, Prompt-learned and Pre-finetuned Neural Ranker (P3 Ranker)
Experiments on MS MARCO and Robust04 show the superior performances of P3 Ranker in few-shot ranking.
arXiv Detail & Related papers (2022-05-04T04:23:29Z) - A Top-down Supervised Learning Approach to Hierarchical Multi-label Classification in Networks [0.21485350418225244]
This paper presents a general prediction model to hierarchical multi-label classification (HMC), where the attributes to be inferred can be specified as a strict poset.
It is based on a top-down classification approach that addresses hierarchical multi-label classification with supervised learning by building a local classifier per class.
The proposed model is showcased with a case study on the prediction of gene functions for Oryza sativa Japonica, a variety of rice.
arXiv Detail & Related papers (2022-03-23T17:29:17Z) - CIM: Class-Irrelevant Mapping for Few-Shot Classification [58.02773394658623]
Few-shot classification (FSC) has been one of the most actively studied problems in recent years.
How to assess the pre-trained FEM is a central question in the FSC community.
We propose a simple, flexible method, dubbed Class-Irrelevant Mapping (CIM).
arXiv Detail & Related papers (2021-09-07T03:26:24Z) - Monocular Depth Estimation via Listwise Ranking using the Plackett-Luce Model [15.472533971305367]
In many real-world applications, the relative depth of objects in an image is crucial for scene understanding.
Recent approaches mainly tackle the problem of depth prediction in monocular images by treating the problem as a regression task.
Yet, ranking methods suggest themselves as a natural alternative to regression, and indeed, ranking approaches leveraging pairwise comparisons have shown promising performance on this problem.
arXiv Detail & Related papers (2020-10-25T13:40:10Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z) - Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words"
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.