Mitigating Label Noise using Prompt-Based Hyperbolic Meta-Learning in Open-Set Domain Generalization
- URL: http://arxiv.org/abs/2412.18342v1
- Date: Tue, 24 Dec 2024 11:00:23 GMT
- Title: Mitigating Label Noise using Prompt-Based Hyperbolic Meta-Learning in Open-Set Domain Generalization
- Authors: Kunyu Peng, Di Wen, M. Saquib Sarfraz, Yufan Chen, Junwei Zheng, David Schneider, Kailun Yang, Jiamin Wu, Alina Roitberg, Rainer Stiefelhagen
- Abstract summary: Open-Set Domain Generalization is a challenging task requiring models to accurately predict familiar categories. Label noise can mislead model optimization, exacerbating the challenges of open-set recognition in novel domains. We propose HyProMeta, a framework that integrates hyperbolic category prototypes for label noise-aware meta-learning.
- Score: 40.7795100916718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-Set Domain Generalization (OSDG) is a challenging task requiring models to accurately predict familiar categories while minimizing confidence for unknown categories to effectively reject them in unseen domains. While the OSDG field has seen considerable advancements, the impact of label noise--a common issue in real-world datasets--has been largely overlooked. Label noise can mislead model optimization, thereby exacerbating the challenges of open-set recognition in novel domains. In this study, we take the first step towards addressing Open-Set Domain Generalization under Noisy Labels (OSDG-NL) by constructing dedicated benchmarks derived from widely used OSDG datasets, including PACS and DigitsDG. We evaluate baseline approaches by integrating techniques from both label denoising and OSDG methodologies, highlighting the limitations of existing strategies in handling label noise effectively. To address these limitations, we propose HyProMeta, a novel framework that integrates hyperbolic category prototypes for label noise-aware meta-learning alongside a learnable new-category agnostic prompt designed to enhance generalization to unseen classes. Our extensive experiments demonstrate the superior performance of HyProMeta compared to state-of-the-art methods across the newly established benchmarks. The source code of this work is released at https://github.com/KPeng9510/HyProMeta.
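The abstract names two ingredients: hyperbolic category prototypes and a learnable new-category agnostic prompt. As a minimal sketch of only the first ingredient (not HyProMeta itself; the function names and the Poincaré-ball parameterization are illustrative assumptions, and the authoritative code is in the linked repository), distances to class prototypes in the Poincaré ball can serve as noise-aware class scores:

```python
import torch

def poincare_distance(x, p, eps=1e-5):
    """Geodesic distance in the Poincare ball (curvature -1):
    d(x, p) = arccosh(1 + 2 * ||x - p||^2 / ((1 - ||x||^2) * (1 - ||p||^2)))."""
    x2 = x.pow(2).sum(-1).clamp(max=1 - eps)                 # ||x||^2, kept inside the ball
    p2 = p.pow(2).sum(-1).clamp(max=1 - eps)                 # ||p||^2
    xp = (x.unsqueeze(1) - p.unsqueeze(0)).pow(2).sum(-1)    # pairwise ||x - p||^2
    arg = 1 + 2 * xp / ((1 - x2).unsqueeze(1) * (1 - p2).unsqueeze(0))
    return torch.acosh(arg.clamp(min=1 + eps))

# Hypothetical usage: embeddings squashed into the unit ball, one prototype per known class.
z = torch.randn(8, 16); z = z / (1 + z.norm(dim=-1, keepdim=True))        # norm < 1
protos = torch.randn(5, 16); protos = protos / (1 + protos.norm(dim=-1, keepdim=True))

dists = poincare_distance(z, protos)   # (batch, num_known_classes)
logits = -dists                        # closer prototype -> higher class score
# Samples far from every prototype can be treated as unknown or potentially mislabeled.
is_suspect = dists.min(dim=1).values > dists.min(dim=1).values.median()
```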
Related papers
- Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization [19.405975017917957]
Multi-source Domain Generalization (DG) aims to improve model robustness to new distributions.
However, DG methods often overlook the effect of label noise, which can confuse a model during training, reducing performance.
In this paper, we investigate this underexplored space, where models are evaluated under both distribution shifts and label noise.
Our proposed DL4ND approach improves noise detection by taking advantage of the observation that noisy samples that may appear indistinguishable within a single domain often show greater variation when compared across domains.
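The abstract does not spell out DL4ND's detection rule; the sketch below is one hypothetical way to act on the stated observation: score each sample by its feature agreement with same-label samples from other domains and flag low-agreement labels as suspect. The function name, threshold, and cosine-similarity choice are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def flag_noisy_labels(feats, labels, domains, threshold=0.3):
    """Flag samples whose features disagree with same-label samples
    from other domains (hypothetical cross-domain check)."""
    feats = F.normalize(feats, dim=-1)
    sim = feats @ feats.T                                   # cosine similarity matrix
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    other_domain = domains.unsqueeze(0) != domains.unsqueeze(1)
    mask = same_label & other_domain
    mask.fill_diagonal_(False)
    # Mean similarity to same-label samples in other domains; low -> suspect label.
    counts = mask.sum(1).clamp(min=1)
    agreement = (sim * mask.float()).sum(1) / counts
    return agreement < threshold                            # True where the label looks noisy

# Usage with toy tensors:
feats = torch.randn(64, 128)
labels = torch.randint(0, 7, (64,))
domains = torch.randint(0, 3, (64,))
noisy = flag_noisy_labels(feats, labels, domains)
```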
arXiv Detail & Related papers (2025-04-03T19:37:57Z)
- Feature Modulation for Semi-Supervised Domain Generalization without Domain Labels [1.2461397728727208]
Semi-supervised domain generalization (SSDG) leverages a small fraction of labeled data alongside unlabeled data to enhance model generalization.
Most of the existing SSDG methods rely on pseudo-labeling (PL) for unlabeled data, often assuming access to domain labels, a privilege not always available.
We tackle the more challenging domain-label agnostic SSDG, where domain labels for unlabeled data are not available during training.
We propose a feature modulation strategy that enhances class-discriminative features while suppressing domain-specific information.
arXiv Detail & Related papers (2025-03-26T18:10:10Z)
- OSLoPrompt: Bridging Low-Supervision Challenges and Open-Set Domain Generalization in CLIP [15.780915391081734]
Low-Shot Open-Set Domain Generalization (LSOSDG) is a novel paradigm unifying low-shot learning with open-set domain generalization (ODG).
We propose OSLOPROMPT, an advanced prompt-learning framework for CLIP with two core innovations.
arXiv Detail & Related papers (2025-03-20T12:51:19Z)
- CAT: Class Aware Adaptive Thresholding for Semi-Supervised Domain Generalization [0.989976359821412]
Domain Generalization seeks to transfer knowledge from source domains to unseen target domains, even in the presence of domain shifts. We propose a novel method, CAT, which leverages semi-supervised learning with limited labeled data to achieve competitive generalization performance under domain shifts. Our approach uses flexible thresholding to generate high-quality pseudo-labels with higher class diversity while refining noisy pseudo-labels to improve their reliability.
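CAT's exact thresholding rule is not given in the abstract; the following is a generic per-class adaptive-threshold sketch in the same spirit (the FlexMatch-style scaling is an assumption), where classes the model is currently less confident about get a lower bar so pseudo-labels stay class-diverse:

```python
import torch

def adaptive_pseudo_labels(probs, base_tau=0.95):
    """Per-class adaptive thresholds (illustrative, not CAT's exact rule)."""
    conf, pred = probs.max(dim=1)                       # (N,), (N,)
    num_classes = probs.size(1)
    # Estimated per-class learning status: mean max-probability per predicted class.
    status = torch.zeros(num_classes)
    for c in range(num_classes):
        hit = pred == c
        status[c] = conf[hit].mean() if hit.any() else 0.0
    tau_c = base_tau * status / status.max().clamp(min=1e-8)   # scale thresholds per class
    keep = conf >= tau_c[pred]                          # accept where confidence beats class threshold
    return pred[keep], keep

probs = torch.softmax(torch.randn(32, 10), dim=1)
pseudo, mask = adaptive_pseudo_labels(probs)
```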
arXiv Detail & Related papers (2024-12-11T15:47:01Z)
- Robustness to Subpopulation Shift with Domain Label Noise via Regularized Annotation of Domains [27.524318404316013]
We introduce Regularized Annotation of Domains (RAD) to train robust last-layer classifiers without the need for explicit domain annotations.
RAD outperforms state-of-the-art annotation-reliant methods even with only 5% noise in the training data for several publicly available datasets.
arXiv Detail & Related papers (2024-02-16T19:35:42Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study the practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of 'unknown' during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- DOST -- Domain Obedient Self-supervised Training for Multi Label Classification with Noisy Labels [27.696103256353254]
This paper studies the effect of label noise on domain rule violation incidents in the multi-label classification task.
We propose the Domain Obedient Self-supervised Training (DOST) paradigm which makes deep learning models more aligned to domain rules.
arXiv Detail & Related papers (2023-08-09T17:53:36Z)
- Feature Noise Boosts DNN Generalization under Label Noise [65.36889005555669]
The presence of label noise in the training data has a profound impact on the generalization of deep neural networks (DNNs).
In this study, we introduce and theoretically demonstrate a simple feature noise method, which directly adds noise to the features of training data.
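Since the method as summarized is literally additive noise on training features, a minimal sketch is straightforward (the Gaussian choice and scale are assumptions, not the paper's exact recipe):

```python
import torch

def add_feature_noise(x, sigma=0.1):
    """Additive Gaussian feature noise applied during training only
    (the abstract says noise is added directly to training features;
    the Gaussian form and scale here are illustrative assumptions)."""
    return x + sigma * torch.randn_like(x)

# Typical use inside a training step:
# feats = encoder(images)
# feats = add_feature_noise(feats)      # regularizes against label noise
# loss = criterion(classifier(feats), labels)
```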
arXiv Detail & Related papers (2023-08-03T08:31:31Z)
- Semi-Supervised Domain Generalization with Stochastic StyleMatch [90.98288822165482]
In real-world applications, we might have only a few labels available from each source domain due to high annotation cost.
In this work, we investigate semi-supervised domain generalization, a more realistic and practical setting.
Our proposed approach, StyleMatch, is inspired by FixMatch, a state-of-the-art semi-supervised learning method based on pseudo-labeling.
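For context, the FixMatch core that StyleMatch builds on looks roughly like this (a sketch of the generic pseudo-labeling step; StyleMatch's style-transfer augmentations and other components are omitted):

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, tau=0.95):
    """Pseudo-label the weakly augmented view, keep only confident
    predictions, and supervise the strongly augmented view with them."""
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()                    # keep confident pseudo-labels only
    logits_s = model(x_strong)
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")
    return (loss * mask).mean()
```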
arXiv Detail & Related papers (2021-06-01T16:00:08Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
To overcome this limitation, we propose domain dynamic adjustment meta-learning (D2AM) without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
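The abstract only states that noisy labels are explicitly related to instances; one common realization of that idea (an assumption here, not necessarily this paper's parameterization) is a per-sample transition matrix T(x) mapping clean class posteriors to noisy-label probabilities:

```python
import torch
import torch.nn as nn

class InstanceDependentNoise(nn.Module):
    """Sketch of an instance-dependent label-noise model: a small head
    predicts a per-sample transition matrix T(x) so that
    p(noisy label | x) = p(clean label | x) @ T(x)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.transition = nn.Linear(feat_dim, num_classes * num_classes)
        self.num_classes = num_classes

    def forward(self, feats):
        clean = torch.softmax(self.classifier(feats), dim=1)           # p(y | x)
        T = self.transition(feats).view(-1, self.num_classes, self.num_classes)
        T = torch.softmax(T, dim=2)                                    # rows sum to 1
        noisy = torch.bmm(clean.unsqueeze(1), T).squeeze(1)            # p(noisy y | x)
        return clean, noisy    # train the noisy head on observed labels
```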
arXiv Detail & Related papers (2021-01-14T05:43:51Z)