EDGE: Unknown-aware Multi-label Learning by Energy Distribution Gap Expansion
- URL: http://arxiv.org/abs/2412.07499v2
- Date: Mon, 23 Dec 2024 15:34:09 GMT
- Title: EDGE: Unknown-aware Multi-label Learning by Energy Distribution Gap Expansion
- Authors: Yuchen Sun, Qianqian Xu, Zitai Wang, Zhiyong Yang, Junwei He,
- Abstract summary: Multi-label Out-Of-Distribution (OOD) detection aims to discriminate the OOD samples from the multi-label In-Distribution (ID) ones.
JointEnergy is a representative multi-label OOD inference criterion.
We propose an unknown-aware multi-label learning framework to reshape the uncertainty energy space layout.
- Score: 47.0234440617797
- Abstract: Multi-label Out-Of-Distribution (OOD) detection aims to discriminate OOD samples from multi-label In-Distribution (ID) ones. Compared with the multi-class setting, it is crucial to model the joint information among classes. To this end, JointEnergy, a representative multi-label OOD inference criterion, summarizes the logits of all the classes. However, we find that JointEnergy can produce an imbalance problem in OOD detection, especially when the model lacks enough discrimination ability. Specifically, samples related only to minority classes tend to be classified as OOD because of the ambiguous energy decision boundary. Moreover, imbalanced multi-label learning methods, originally designed for the ID setting, are not suitable for OOD detection and can even produce a serious negative transfer effect. In this paper, we resort to auxiliary outlier exposure (OE) and propose an unknown-aware multi-label learning framework to reshape the uncertainty energy space layout. In this framework, the energy score is optimized separately for tail ID samples and unknown samples, and the energy distribution gap between them is expanded, so that tail ID samples receive significantly larger energy scores than OOD ones. In addition, a simple yet effective measure is designed to select more informative OE datasets. Finally, comprehensive experiments on multiple multi-label and OOD datasets demonstrate the effectiveness of the proposed method.
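For concreteness, the JointEnergy criterion referenced above (Wang et al., NeurIPS 2021) scores a sample by summing the label-wise free energies log(1 + exp(f_k)) over all labels, so higher scores indicate ID samples. The sketch below implements that score in PyTorch, together with an illustrative margin loss for the energy-distribution-gap expansion described in the abstract; the loss is a hedged reconstruction for illustration only, not the exact EDGE objective.

```python
import torch
import torch.nn.functional as F

def joint_energy(logits: torch.Tensor) -> torch.Tensor:
    """JointEnergy score: sum of label-wise negative free energies
    log(1 + exp(f_k)) over all labels; higher => more in-distribution.

    logits: (batch, num_labels) raw outputs of a multi-label classifier.
    """
    # softplus(f) == log(1 + exp(f)), computed stably.
    return F.softplus(logits).sum(dim=-1)

def energy_gap_loss(id_logits: torch.Tensor,
                    oe_logits: torch.Tensor,
                    margin: float = 10.0) -> torch.Tensor:
    """Illustrative gap-expansion loss (an assumption, not the exact
    EDGE objective): require the mean JointEnergy of (tail) ID samples
    to exceed that of auxiliary outliers by at least `margin`."""
    gap = joint_energy(id_logits).mean() - joint_energy(oe_logits).mean()
    return F.relu(margin - gap)

# Toy usage: 20 labels (a VOC-sized label space), random logits.
id_logits = torch.randn(8, 20) + 2.0   # ID samples: larger logits
oe_logits = torch.randn(8, 20) - 2.0   # auxiliary outliers: smaller logits
print(joint_energy(id_logits))          # per-sample scores, shape (8,)
print(energy_gap_loss(id_logits, oe_logits))
```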
Related papers
- COOD: Concept-based Zero-shot OOD Detection [12.361461338978732]
We introduce COOD, a novel zero-shot multi-label OOD detection framework.
By enriching the semantic space with both positive and negative concepts for each label, our approach models complex label dependencies.
Our method significantly outperforms existing approaches, achieving approximately 95% average AUROC on both the VOC and COCO datasets.
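As a rough illustration of the concept-based idea (everything below, including the max-over-concepts score, is an assumption rather than COOD's exact formulation), per-label evidence can be computed from a CLIP-style image embedding against positive and negative concept embeddings for each label:

```python
import torch
import torch.nn.functional as F

def concept_scores(image_emb, pos_concepts, neg_concepts):
    """Hypothetical per-label evidence: similarity to the label's
    positive concepts minus similarity to its negative concepts.

    image_emb:    (d,)       L2-normalized image embedding
    pos_concepts: (L, P, d)  positive concept embeddings per label
    neg_concepts: (L, N, d)  negative concept embeddings per label
    """
    pos = torch.einsum('d,lpd->lp', image_emb, pos_concepts).max(dim=1).values
    neg = torch.einsum('d,lnd->ln', image_emb, neg_concepts).max(dim=1).values
    return pos - neg  # (L,); weak evidence for every label suggests OOD

# Toy usage with random stand-ins for CLIP text/image embeddings:
d, L, P, N = 512, 20, 5, 5
img = F.normalize(torch.randn(d), dim=0)
pos = F.normalize(torch.randn(L, P, d), dim=-1)
neg = F.normalize(torch.randn(L, N, d), dim=-1)
ood_score = -concept_scores(img, pos, neg).max()  # high when no label fires
```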
arXiv Detail & Related papers (2024-11-15T08:15:48Z)
- Scalable Ensemble Diversification for OOD Generalization and Detection [68.8982448081223]
Scalable Ensemble Diversification (SED) identifies hard training samples on the fly and encourages the ensemble members to disagree on them.
We show how to avoid the expensive exhaustive pairwise disagreement computations across models that existing methods require, as sketched below.
For OOD generalization, we observe large benefits from the diversification in multiple settings, including output-space (classical) ensembles and weight-space ensembles (model soups).
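One plausible way to avoid the O(M²) pairwise term (a sketch under that assumption; the paper's exact objective may differ) is to push each member away from the ensemble-mean prediction on the hard samples only, which costs O(M):

```python
import torch

def diversification_loss(member_logits: torch.Tensor,
                         hard_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of an O(M) disagreement regularizer (an assumption, not
    necessarily SED's exact objective): instead of all M*(M-1)/2
    pairwise comparisons, push each member away from the ensemble-mean
    prediction, but only on samples flagged as hard.

    member_logits: (M, B, C) logits from M ensemble members
    hard_mask:     (B,) bool, True for hard samples
    """
    probs = member_logits.softmax(dim=-1)            # (M, B, C)
    mean_probs = probs.mean(dim=0, keepdim=True)     # (1, B, C)
    # KL(member || mean) per sample; negated so minimizing the loss
    # maximizes disagreement on the hard samples.
    kl = (probs * (probs.clamp_min(1e-8).log()
                   - mean_probs.clamp_min(1e-8).log())).sum(-1)  # (M, B)
    return -(kl * hard_mask.float()).mean()

# Hard samples might be flagged on the fly (e.g. by high loss); toy mask:
logits = torch.randn(4, 16, 10)   # 4 members, batch 16, 10 classes
hard = torch.rand(16) > 0.5
loss = diversification_loss(logits, hard)
```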
arXiv Detail & Related papers (2024-09-25T10:30:24Z)
- Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-09-22T05:40:25Z)
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples are actually semantically close to the in-distribution (ID) samples, which makes deciding whether a sample is OOD a Sorites paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Detecting out-of-distribution (OOD) samples is crucial when deploying machine learning models in open-world scenarios.
We propose to tackle the lack of actual OOD data by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
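A minimal sketch of the general recipe, where the placeholder class names and the simple probability-mass score are assumptions rather than EOE's exact implementation:

```python
import torch
import torch.nn.functional as F

# Hypothetical pipeline: an LLM is prompted for class names that are
# similar to, but distinct from, the ID classes; a CLIP-like model then
# scores images against both name sets.
id_classes = ["dog", "cat", "bird"]
envisioned_ood = ["fox", "raccoon", "bat"]   # e.g. returned by an LLM prompt

def ood_score(image_emb, id_text_embs, ood_text_embs, temp=0.01):
    """Probability mass assigned to envisioned OOD names; higher => more OOD."""
    sims = torch.cat([image_emb @ id_text_embs.T,
                      image_emb @ ood_text_embs.T])   # (K_id + K_ood,)
    probs = F.softmax(sims / temp, dim=0)
    return probs[-ood_text_embs.shape[0]:].sum()

# Toy usage with random stand-ins for CLIP embeddings:
d = 512
img = F.normalize(torch.randn(d), dim=0)
id_txt = F.normalize(torch.randn(len(id_classes), d), dim=-1)
ood_txt = F.normalize(torch.randn(len(envisioned_ood), d), dim=-1)
print(float(ood_score(img, id_txt, ood_txt)))
```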
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- Multi-Label Out-of-Distribution Detection with Spectral Normalized Joint Energy [14.149428145967939]
We introduce Spectral Normalized Joint Energy (SNoJoE), a method that consolidates label-specific information across multiple labels.
Our findings indicate that the application of spectral normalization to joint energy scores notably amplifies the model's capability for OOD detection.
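A minimal sketch of the combination, using PyTorch's built-in spectral normalization together with the JointEnergy score. Which layers SNoJoE actually normalizes is a detail of the paper; wrapping only the classification head here is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Spectral normalization constrains the layer's Lipschitz constant;
# the JointEnergy score is then computed from the resulting logits.
head = nn.utils.spectral_norm(nn.Linear(2048, 20))   # 20 labels, hypothetical

def joint_energy(logits):
    return F.softplus(logits).sum(dim=-1)  # higher => more in-distribution

features = torch.randn(8, 2048)   # stand-in for backbone features
scores = joint_energy(head(features))
```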
arXiv Detail & Related papers (2024-05-08T02:05:38Z)
- Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model [0.0]
Out-of-distribution (OOD) detection is a critical task to ensure the reliability and security of machine learning models.
In this paper, a novel method called ODPC is proposed, in which a large language model is prompted to generate OOD peer classes that are semantically close to the ID classes.
Experiments on five benchmark datasets show that the method we propose can yield state-of-the-art results.
arXiv Detail & Related papers (2024-03-20T06:04:05Z)
- Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation [110.34982764201689]
Out-of-distribution (OOD) detection is important for deploying reliable machine learning models on real-world applications.
Recent advances in outlier exposure have shown promising results on OOD detection by fine-tuning the model with informatively sampled auxiliary outliers.
We propose a novel framework, namely, Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers.
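A sketch of what informative extrapolation might look like (an assumption, not DivOE's exact procedure): take a few signed-gradient steps that increase a given outlier's ID-ness score, yielding harder, more diverse outliers near the decision boundary:

```python
import torch
import torch.nn.functional as F

def extrapolate_outlier(model, x_oe, steps=3, step_size=0.01):
    """Nudge an auxiliary outlier toward a higher energy-style score so
    it lies closer to the ID/OOD boundary (illustrative only)."""
    x = x_oe.clone().requires_grad_(True)
    for _ in range(steps):
        score = F.softplus(model(x)).sum()   # JointEnergy-style ID-ness
        grad, = torch.autograd.grad(score, x)
        x = (x + step_size * grad.sign()).detach().requires_grad_(True)
    return x.detach()

# Toy usage with a hypothetical linear "model":
model = torch.nn.Linear(32, 20)
x_oe = torch.randn(4, 32)
harder_oe = extrapolate_outlier(model, x_oe)
```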
arXiv Detail & Related papers (2023-10-21T07:16:09Z)
- Exploiting Mixed Unlabeled Data for Detecting Samples of Seen and Unseen Out-of-Distribution Classes [5.623232537411766]
Out-of-Distribution (OOD) detection is essential in real-world applications, which has attracted increasing attention in recent years.
Most existing OOD detection methods require a large amount of labeled In-Distribution (ID) data, incurring a heavy labeling cost.
In this paper, we focus on the more realistic scenario, where limited labeled data and abundant unlabeled data are available.
We propose the Adaptive In-Out-aware Learning (AIOL) method, in which we adaptively select potential ID and OOD samples from the mixed unlabeled data.
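A toy sketch of the adaptive selection idea, where both the energy-style score and the quantile thresholds are assumptions rather than AIOL's exact criterion: rank the mixed unlabeled batch and keep only the confident extremes as potential ID and potential OOD.

```python
import torch

def select_id_ood(logits, id_quantile=0.8, ood_quantile=0.2):
    """Treat the most confident unlabeled samples as potential ID and
    the least confident as potential OOD; the ambiguous middle is left
    unused. Quantile thresholds adapt to each batch."""
    score = torch.logsumexp(logits, dim=-1)   # energy-style: higher => ID-like
    hi = torch.quantile(score, id_quantile)
    lo = torch.quantile(score, ood_quantile)
    return score >= hi, score <= lo           # (potential ID, potential OOD)

logits = torch.randn(64, 10)                  # mixed unlabeled batch
id_mask, ood_mask = select_id_ood(logits)
```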
arXiv Detail & Related papers (2022-10-13T08:34:25Z)
- Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under the Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) settings.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the difference between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contribution, and the bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.