ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label
Learning
- URL: http://arxiv.org/abs/2301.12077v2
- Date: Thu, 18 May 2023 02:07:54 GMT
- Title: ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label
Learning
- Authors: Mingyu Xu, Zheng Lian, Lei Feng, Bin Liu, Jianhua Tao
- Abstract summary: Noisy partial label learning is an important branch of weakly supervised learning.
Most of the existing works attempt to detect noisy samples and estimate the ground-truth label for each noisy sample.
We propose a novel framework for noisy PLL with theoretical guarantees, called ``Adjusting Label Importance Mechanism (ALIM)''.
It aims to reduce the negative impact of detection errors by trading off the initial candidate set and model outputs.
- Score: 46.53885746394252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noisy partial label learning (noisy PLL) is an important branch of weakly
supervised learning. Unlike PLL, where the ground-truth label is guaranteed to be
in the candidate label set, noisy PLL relaxes this constraint and allows the
ground-truth label to be absent from the candidate label set. To address this
challenging problem, most of the existing works attempt to detect noisy samples
and estimate the ground-truth label for each noisy sample. However, detection
errors are unavoidable. These errors can accumulate during training and
continuously affect model optimization. To this end, we propose a novel
framework for noisy PLL with theoretical guarantees, called ``Adjusting Label
Importance Mechanism (ALIM)''. It aims to reduce the negative impact of
detection errors by trading off the initial candidate set and model outputs.
ALIM is a plug-in strategy that can be integrated with existing PLL approaches.
Experimental results on benchmark datasets demonstrate that our method can
achieve state-of-the-art performance on noisy PLL.
Our code can be found in Supplementary Material.
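The abstract describes ALIM as trading off the initial candidate set against the model's outputs to soften detection errors. The snippet below is a minimal illustrative sketch of that idea, not the paper's exact formulation: the blending form, the coefficient `lam`, and the function name are all assumptions for illustration.

```python
import numpy as np

def alim_adjust(candidate_mask, probs, lam=0.5):
    """Illustrative trade-off between the candidate set and model outputs.

    candidate_mask: (n, c) 0/1 indicator of each sample's candidate labels.
    probs:          (n, c) model's predicted label distribution.
    lam:            weight given to the model's outputs (hypothetical knob).

    Blends the two sources and renormalizes, so a label outside the
    candidate set can still receive weight if the model is confident in it,
    which softens the impact of a wrong candidate set on a noisy sample.
    """
    w = candidate_mask + lam * probs
    return w / w.sum(axis=-1, keepdims=True)
```

For a sample with candidate set {0, 1} and a model that leans toward label 2, the adjusted weights keep mass on the candidates while letting label 2 recover some importance, which is the "trade-off" the abstract refers to.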
Related papers
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (partial label)
NPLL relaxes this constraint by allowing some partial labels to not contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm.
arXiv Detail & Related papers (2024-02-07T13:32:47Z) - Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - FedNoisy: Federated Noisy Label Learning Benchmark [53.73816587601204]
Federated learning has gained popularity for distributed learning without aggregating sensitive data from clients.
The distributed and isolated nature of client data can be compounded by poor data quality, making federated learning more vulnerable to noisy labels.
We present the first standardized benchmark that can help researchers fully explore potential federated noisy-label settings.
arXiv Detail & Related papers (2023-06-20T16:18:14Z) - Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z) - Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z) - ARNet: Automatic Refinement Network for Noisy Partial Label Learning [41.577081851679765]
We propose a novel framework called "Automatic Refinement Network (ARNet)"
Our method consists of multiple rounds. In each round, we purify the noisy samples through two key modules, i.e., noisy sample detection and label correction.
We prove that our method is able to reduce the noise level of the dataset and eventually approximate the Bayes optimal.
arXiv Detail & Related papers (2022-11-09T10:01:25Z) - Label Noise-Robust Learning using a Confidence-Based Sieving Strategy [15.997774467236352]
In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge.
Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.
We propose a novel discriminator metric called confidence error and a sieving strategy called CONFES to differentiate between the clean and noisy samples effectively.
arXiv Detail & Related papers (2022-10-11T10:47:28Z) - Neighborhood Collective Estimation for Noisy Label Identification and
Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
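The last entry above describes re-estimating a sample's reliability by contrasting it with its feature-space nearest neighbors. Below is a minimal sketch of that general idea under assumed inputs (cosine similarity over features, neighbor agreement with the given label); the function name and details are illustrative, not the paper's method.

```python
import numpy as np

def neighborhood_reliability(features, probs, labels, k=3):
    """Illustrative neighbor-based reliability score.

    features: (n, d) feature vectors.
    probs:    (n, c) predicted label distributions.
    labels:   (n,)   the (possibly noisy) given label of each sample.

    A sample's reliability is the average probability its k nearest
    neighbors assign to its given label: if the neighborhood disagrees
    with the label, the score is low, flagging a likely noisy label.
    """
    # Cosine similarity between all pairs of samples.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f.T
    np.fill_diagonal(sims, -np.inf)  # exclude self from neighbors
    nbrs = np.argsort(-sims, axis=1)[:, :k]
    # Mean neighbor probability for each sample's given label.
    return probs[nbrs, labels[:, None]].mean(axis=1)
```

Scoring with neighbors rather than a sample's own prediction is what mitigates the confirmation bias mentioned in the summary: a sample cannot vouch for its own label.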
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.