Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning
- URL: http://arxiv.org/abs/2402.04835v2
- Date: Tue, 28 May 2024 09:31:48 GMT
- Title: Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning
- Authors: Darshana Saravanan, Naresh Manwani, Vineet Gandhi
- Abstract summary: Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label), one of which is the true label.
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels to not contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm.
- Score: 8.387189407144403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (partial label), one of which is the true label. Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels to not contain the true label, enhancing the practicality of the problem. Our work centres on NPLL and presents a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm. These pseudo-label and image pairs are then used to train a deep neural network classifier with label smoothing. The classifier's features and predictions are subsequently employed to refine and enhance the accuracy of pseudo-labels. We perform thorough experiments on seven datasets and compare against nine NPLL and PLL methods. We achieve state-of-the-art results in all studied settings from the prior literature, obtaining substantial gains in fine-grained classification and extreme noise scenarios. Further, we show the promising generalisation capability of our framework in realistic crowd-sourced datasets.
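As a rough illustration of the two-stage recipe the abstract describes, here is a minimal PyTorch sketch: weighted k-nearest-neighbour pseudo-labelling restricted to each image's candidate set, followed by training against label-smoothed targets. The choice of k, the softmax vote weighting, and the smoothing value eps are illustrative assumptions, not the authors' reported settings.

```python
import torch
import torch.nn.functional as F

def knn_pseudo_labels(features, partial_labels, k=10):
    """features: (N, D), L2-normalised; partial_labels: (N, C) 0/1 float candidate masks.
    Returns one hard pseudo-label per image, chosen from its own candidate set."""
    sims = features @ features.T                 # cosine similarities
    sims.fill_diagonal_(-float("inf"))           # exclude self-votes
    weights, idx = sims.topk(k, dim=1)           # k nearest neighbours per image
    weights = torch.softmax(weights, dim=1)      # similarity -> vote weight
    votes = torch.einsum("nk,nkc->nc", weights, partial_labels[idx])
    votes = votes * partial_labels               # a pseudo-label must be a candidate
    return votes.argmax(dim=1)

def label_smoothed_loss(logits, pseudo, eps=0.1):
    """Cross-entropy against label-smoothed one-hot pseudo-label targets."""
    n_cls = logits.size(1)
    target = torch.full_like(logits, eps / (n_cls - 1))
    target.scatter_(1, pseudo.unsqueeze(1), 1.0 - eps)
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

On PyTorch 1.10+ the smoothing step can equivalently be written as F.cross_entropy(logits, pseudo, label_smoothing=eps).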
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning [23.847160480176697]
We propose a simple yet powerful idea called NPN, which revolutionizes noisy label learning by adaptively integrating partial label learning and negative learning.
We generate reliable complementary labels using all non-candidate labels for NL to enhance model robustness through indirect supervision.
Experiments conducted on both synthetically corrupted and real-world noisy datasets demonstrate the superiority of NPN compared to other state-of-the-art (SOTA) methods.
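A hedged sketch of the negative-learning (NL) step: every non-candidate label is known to be wrong, so its predicted probability can be pushed towards zero (the same mechanism the complementary-classifier paper below exploits). The loss below is the standard NL objective; its exact use inside NPN is an assumption.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, candidate_mask):
    """Indirect supervision from non-candidate labels: for every label the
    partial label rules out, minimise -log(1 - p(label|x)).
    candidate_mask: (N, C) binary, 1 where the label is a candidate."""
    probs = F.softmax(logits, dim=1).clamp(1e-7, 1 - 1e-7)
    complementary = 1.0 - candidate_mask            # labels known to be wrong
    return (-torch.log1p(-probs) * complementary).sum(dim=1).mean()
```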
arXiv Detail & Related papers (2023-12-15T03:06:19Z)
- BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can degrade the performance of existing LNL algorithms by a large margin.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
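A sketch of the label-flipping idea behind such crafted noise, under the assumption that the attacker holds a trained classifier: the most damaging flip for a sample is to the class the model currently finds least plausible. This captures only the spirit of BadLabel's construction, not its exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def adversarial_label_flip(model, x, y, noise_rate=0.2):
    """Flip a fraction of labels to each sample's least-plausible class,
    which standard small-loss heuristics find hardest to detect."""
    probs = F.softmax(model(x), dim=1)
    probs.scatter_(1, y.unsqueeze(1), float("inf"))   # never pick the true class
    worst = probs.argmin(dim=1)                       # least-plausible wrong class
    flip = torch.rand(y.size(0), device=y.device) < noise_rate
    return torch.where(flip, worst, y)
```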
arXiv Detail & Related papers (2023-05-28T06:26:23Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble of labels is adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
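One plausible reading of the ensemble-label update, sketched below: keep an exponential moving average of per-epoch predictions, so that a single noisy epoch cannot overwrite a pseudo-label. The momentum value is an assumption.

```python
import torch

def ensemble_label_update(running_probs, epoch_probs, momentum=0.9):
    """Ensemble-style pseudo-label update: blend this epoch's softmax
    predictions into a running average and re-derive hard labels from it."""
    running_probs = momentum * running_probs + (1.0 - momentum) * epoch_probs
    return running_probs, running_probs.argmax(dim=1)
```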
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
- Part-based Pseudo Label Refinement for Unsupervised Person Re-identification [29.034390810078172]
Unsupervised person re-identification (re-ID) aims at learning discriminative representations for person retrieval from unlabeled data.
Recent techniques accomplish this task by using pseudo-labels, but these labels are inherently noisy and deteriorate the accuracy.
We propose a novel Part-based Pseudo Label Refinement (PPLR) framework that reduces the label noise by employing the complementary relationship between global and part features.
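A hedged sketch of the complementary global/part idea: predictions from the global feature and from P local part features are blended, so label noise in any single view is diluted. PPLR's actual agreement-aware weighting is more involved; beta here is an assumed fixed mixing weight.

```python
import torch

def part_refined_labels(global_probs, part_probs, beta=0.5):
    """global_probs: (N, C); part_probs: (N, P, C) per-part predictions.
    Returns refined hard pseudo-labels from the blended distribution."""
    refined = beta * global_probs + (1.0 - beta) * part_probs.mean(dim=1)
    return refined.argmax(dim=1)
```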
arXiv Detail & Related papers (2022-03-28T12:15:53Z)
- PARS: Pseudo-Label Aware Robust Sample Selection for Learning with Noisy Labels [5.758073912084364]
We propose PARS: Pseudo-Label Aware Robust Sample Selection.
PARS exploits all training samples using both the raw/noisy labels and estimated/refurbished pseudo-labels via self-training.
Extensive studies on the noisy CIFAR-10 and CIFAR-100 datasets show that PARS significantly outperforms the state of the art.
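A minimal sketch of pseudo-label-aware sample selection, under the usual small-loss assumption: low-loss samples whose prediction agrees with the given label are treated as clean, and the rest receive refurbished pseudo-labels via self-training. The quantile threshold is illustrative, not PARS's exact rule.

```python
import torch
import torch.nn.functional as F

def select_and_refurbish(logits, noisy_labels, loss_quantile=0.5):
    """Split a batch into 'clean' and 'noisy' and propose pseudo-labels."""
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    agree = logits.argmax(dim=1) == noisy_labels
    clean = (losses <= losses.quantile(loss_quantile)) & agree
    pseudo = logits.argmax(dim=1)        # refurbished labels for the rest
    return clean, pseudo
```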
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by the real number of each label, representing the degree to which each label describes the feature.
arXiv Detail & Related papers (2022-01-26T09:31:55Z)
- Label Noise Types and Their Effects on Deep Learning [0.0]
In this work, we provide a detailed analysis of the effects of different kinds of label noise on learning.
We propose a generic framework to generate feature-dependent label noise, which we show to be the most challenging case for learning.
For the ease of other researchers to test their algorithms with noisy labels, we share corrupted labels for the most commonly used benchmark datasets.
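A sketch of one way to generate feature-dependent noise with a trained model (the paper's generic framework admits several variants; this one is an assumption): corrupt the samples the model is least confident about, flipping each to its most confusable wrong class rather than to a random one.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def feature_dependent_noise(model, x, y, noise_rate=0.35):
    """Flip labels of the samples the model is least sure about to the
    highest-probability wrong class, yielding feature-dependent noise."""
    probs = F.softmax(model(x), dim=1)
    confidence = probs.gather(1, y.unsqueeze(1)).squeeze(1)
    victims = confidence.argsort()[: int(noise_rate * y.size(0))]
    probs[victims, y[victims]] = -1.0                 # rule out the true class
    y_noisy = y.clone()
    y_noisy[victims] = probs[victims].argmax(dim=1)   # most confusable class
    return y_noisy
```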
arXiv Detail & Related papers (2020-03-23T18:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.