PLOOD: Partial Label Learning with Out-of-distribution Objects
- URL: http://arxiv.org/abs/2403.06681v4
- Date: Wed, 12 Mar 2025 05:54:38 GMT
- Title: PLOOD: Partial Label Learning with Out-of-distribution Objects
- Authors: Jintao Huang, Yiu-Ming Cheung, Chi-Man Vong
- Abstract summary: Existing Partial Label Learning (PLL) methods posit that training and test data adhere to the same distribution. We introduce the OODPLL paradigm to tackle this significant yet underexplored issue. Our newly proposed PLOOD framework enables simulating OOD objects through Positive-Negative Sample Augmented (PNSA) feature learning and Partial Energy (PE)-based label refinement.
- Score: 37.23754625256131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Partial Label Learning (PLL) methods posit that training and test data adhere to the same distribution, a premise that frequently does not hold in practical applications where Out-of-Distribution (OOD) objects are present. We introduce the OODPLL paradigm to tackle this significant yet underexplored issue. Our newly proposed PLOOD framework enables PLL to handle OOD objects through Positive-Negative Sample Augmented (PNSA) feature learning and Partial Energy (PE)-based label refinement. The PNSA module enhances feature discrimination and OOD recognition by simulating in- and out-of-distribution instances via structured positive and negative sample augmentation, in contrast to conventional PLL methods, which struggle to distinguish OOD samples. The PE scoring mechanism combines label confidence with energy-based uncertainty estimation, thereby reducing the impact of imprecise supervision and effectively achieving label disambiguation. Experimental results on CIFAR-10 and CIFAR-100, alongside various OOD datasets, demonstrate that conventional PLL methods degrade substantially in OOD scenarios, underscoring the necessity of incorporating OOD considerations into PLL approaches. Ablation studies show that both PNSA feature learning and PE-based label refinement are necessary for PLOOD to work, offering a robust solution for open-set PLL problems.
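The abstract does not reproduce the PE scoring rule itself. As a hedged illustration only, the sketch below contrasts the free energy of the partial-label candidate set against the free energy over the full label space, assuming raw classifier logits; the function and variable names are hypothetical and this is not PLOOD's exact formulation:

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log-sum-exp.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def partial_energy_gap(logits, candidate_mask):
    """Illustrative partial-energy style score: the gap between the
    free energy restricted to the candidate label set and the free
    energy over all classes. A small gap means the candidate set
    already carries most of the model's probability mass; a large gap
    hints at OOD input or a poor candidate set. A sketch, not the
    paper's scoring rule."""
    e_all = -logsumexp(logits)
    e_cand = -logsumexp(logits[candidate_mask])
    return float(e_cand - e_all)  # always >= 0 by construction

logits = np.array([4.0, 3.5, 0.2, -1.0])
mask = np.array([True, True, False, False])  # candidate labels {0, 1}
score = partial_energy_gap(logits, mask)
```

Because the candidate set is a subset of all classes, its log-sum-exp can never exceed the full one, so the gap is non-negative and shrinks as the candidate labels absorb more of the logit mass.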
Related papers
- Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection [15.184096796229115]
We propose a post-hoc method, Perturbation-Rectified OOD detection (PRO), based on the insight that prediction confidence for OOD inputs is more susceptible to reduction under perturbation than in-distribution (IND) inputs.
On a CIFAR-10 model with adversarial training, PRO effectively detects near-OOD inputs, achieving a reduction of more than 10% on FPR@95 compared to state-of-the-art methods.
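The summary above conveys only the core intuition: OOD confidence is more fragile under perturbation. A minimal sketch of that intuition, using random noise in place of PRO's actual rectification procedure (the model, noise scheme, and parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def confidence_drop(model, x, eps=0.05, n=8):
    """Perturbation-based OOD signal (sketch of the PRO intuition):
    average drop in max-softmax confidence under small random input
    perturbations. The real PRO method uses a more principled
    perturbation-rectification scheme; uniform noise is illustrative."""
    base = softmax(model(x)).max()
    drops = []
    for _ in range(n):
        noise = rng.uniform(-eps, eps, size=x.shape)
        drops.append(base - softmax(model(x + noise)).max())
    return float(np.mean(drops))

# Toy linear "model" standing in for a trained classifier.
W = rng.normal(size=(3, 4))
model = lambda x: W @ x
x = rng.normal(size=4)
d = confidence_drop(model, x)
```

A larger average drop would flag the input as more OOD-like under this reading; a real detector would threshold the rectified score, not this toy statistic.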
arXiv Detail & Related papers (2025-03-24T15:32:33Z) - COOD: Concept-based Zero-shot OOD Detection [12.361461338978732]
We introduce COOD, a novel zero-shot multi-label OOD detection framework.
By enriching the semantic space with both positive and negative concepts for each label, our approach models complex label dependencies.
Our method significantly outperforms existing approaches, achieving approximately 95% average AUROC on both VOC and datasets.
arXiv Detail & Related papers (2024-11-15T08:15:48Z) - Out-of-Distribution Learning with Human Feedback [26.398598663165636]
This paper presents a novel framework for OOD learning with human feedback.
Our framework capitalizes on the freely available unlabeled data in the wild.
By exploiting human feedback, we enhance the robustness and reliability of machine learning models.
arXiv Detail & Related papers (2024-08-14T18:49:27Z) - Enhancing OOD Detection Using Latent Diffusion [5.093257685701887]
Out-of-Distribution (OOD) detection algorithms have been developed to identify unknown samples or objects in real-world deployments.
We propose an Outlier Aware Learning framework, which synthesizes OOD training data in the latent space.
We also develop a knowledge distillation module to prevent the degradation of ID classification accuracy when training with OOD data.
arXiv Detail & Related papers (2024-06-24T11:01:43Z) - Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Out-of-distribution (OOD) samples are crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) for potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z) - How Does Unlabeled Data Provably Help Out-of-Distribution Detection? [63.41681272937562]
Harnessing unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and out-of-distribution (OOD) data.
This paper introduces a new learning framework SAL (Separate And Learn) that offers both strong theoretical guarantees and empirical effectiveness.
arXiv Detail & Related papers (2024-02-05T20:36:33Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test-Time Adaptation framework for Out-of-Distribution Detection (abbr).
abbr utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of abbr through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is critical for identifying test samples that deviate from in-distribution (ID) data, ensuring network robustness and reliability.
This paper presents a flexible framework for OOD knowledge distillation that extracts OOD-sensitive information from a network to develop a binary classifier capable of distinguishing between ID and OOD samples.
arXiv Detail & Related papers (2023-11-14T08:05:02Z) - Class Relevance Learning For Out-of-distribution Detection [16.029229052068]
This paper presents an innovative class relevance learning method tailored for OOD detection.
Our method establishes a comprehensive class relevance learning framework, strategically harnessing interclass relationships within the OOD pipeline.
arXiv Detail & Related papers (2023-09-21T08:38:21Z) - How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models? [29.75562085178755]
We study how fine-tuning impacts OOD detection for few-shot downstream tasks.
Our results suggest that a proper choice of OOD scores is essential for CLIP-based fine-tuning.
We also show that prompt learning demonstrates the state-of-the-art OOD detection performance over the zero-shot counterpart.
arXiv Detail & Related papers (2023-06-09T17:16:50Z) - AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z) - Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Out-of-distribution (OOD) detection methods assume that they have test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z) - Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data -- that naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z) - On the Impact of Spurious Correlation for Out-of-distribution Detection [14.186776881154127]
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
arXiv Detail & Related papers (2021-09-12T23:58:17Z) - MOOD: Multi-level Out-of-distribution Detection [13.207044902083057]
Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, multi-level out-of-distribution detection MOOD, which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
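The computational saving reported above comes from exiting early at intermediate classifier heads. A minimal sketch of that control flow, using an energy criterion and hand-picked thresholds as assumptions (not the paper's exact exit rule):

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log-sum-exp.
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def mood_style_inference(exit_logits, thresholds):
    """Multi-level OOD inference in the spirit of MOOD (illustrative):
    walk through the intermediate classifier heads in order and stop
    at the first exit whose energy score clears its threshold, saving
    the computation of the deeper heads. The energy criterion and
    thresholds are assumptions for this sketch."""
    for level, (logits, tau) in enumerate(zip(exit_logits, thresholds)):
        energy = -logsumexp(logits)
        if energy < tau:            # low energy -> confidently in-distribution
            return level, energy    # early exit
    return len(exit_logits) - 1, energy  # fall through to the final head
```

In a real multi-exit network each head's logits would only be computed on demand; here the list stands in for that lazy evaluation.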
arXiv Detail & Related papers (2021-04-30T02:18:31Z) - Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection [72.35532598131176]
We propose an unsupervised method to detect OOD samples using a k-NN density estimate.
We leverage a recent insight about label smoothing, which we call the Label Smoothed Embedding Hypothesis.
We show that our proposal outperforms many OOD baselines and also provide new finite-sample high-probability statistical results.
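The core of a k-NN density estimate for OOD detection is simple enough to sketch; the scoring below is a generic illustration of the idea (distance to the k-th nearest training embedding), not the paper's full method, which additionally relies on label-smoothed embeddings:

```python
import numpy as np

def knn_ood_score(feat, train_feats, k=5):
    """Generic k-NN OOD score (sketch): Euclidean distance to the
    k-th nearest training embedding. A larger distance implies lower
    local density around the query, hence a more likely OOD sample."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return float(np.sort(d)[k - 1])
```

Thresholding this score gives a detector; in practice the embeddings would come from a trained (here, label-smoothed) network rather than raw inputs.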
arXiv Detail & Related papers (2021-02-09T21:04:44Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.