Partial Label Supervision for Agnostic Generative Noisy Label Learning
- URL: http://arxiv.org/abs/2308.01184v2
- Date: Wed, 28 Feb 2024 16:09:24 GMT
- Title: Partial Label Supervision for Agnostic Generative Noisy Label Learning
- Authors: Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro
- Abstract summary: Noisy label learning has been tackled with both discriminative and generative approaches.
We propose a novel framework for generative noisy label learning that addresses these challenges.
- Score: 18.29334728940232
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Noisy label learning has been tackled with both discriminative and generative
approaches. Despite the simplicity and efficiency of discriminative methods,
generative models offer a more principled way of disentangling clean and noisy
labels and estimating the label transition matrix. However, existing generative
methods often require inferring additional latent variables through costly
generative modules or heuristic assumptions, which hinder adaptive optimisation
for different causal directions. They also assume a uniform clean label prior,
which does not reflect the sample-wise clean label distribution and
uncertainty. In this paper, we propose a novel framework for generative noisy
label learning that addresses these challenges. First, we propose a new
single-stage optimisation that directly approximates image generation by a
discriminative classifier output. This approximation significantly reduces the
computation cost of image generation, preserves the generative modelling
benefits, and enables our framework to be agnostic with regard to different
causality scenarios (i.e., whether the image generates the label or vice
versa). Second, we
introduce a new Partial Label Supervision (PLS) for noisy label learning that
accounts for both clean label coverage and uncertainty. PLS does not merely
aim at minimising the training loss; it also seeks to capture the underlying
sample-wise clean label distribution and uncertainty. Extensive experiments on
computer vision and natural language processing (NLP) benchmarks demonstrate
that our generative modelling achieves state-of-the-art results while
significantly reducing the computation cost. Our code is available at
https://github.com/lfb-1/GNL.
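To make the two contributions above concrete, here is a minimal, hypothetical PyTorch sketch, not the released implementation at the repository above. It assumes the classifier's detached softmax output stands in for the costly generative term, and it builds each sample's partial label set by starting from the given noisy label and adding top-ranked predictions until a coverage threshold is reached, so the set size encodes sample-wise uncertainty. The names `build_partial_labels`, `pls_loss`, and `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def build_partial_labels(logits: torch.Tensor,
                         noisy_labels: torch.Tensor,
                         tau: float = 0.9) -> torch.Tensor:
    """Illustrative PLS construction (not the official GNL code).

    Starts each candidate set from the given (possibly noisy) label, then
    adds classes in descending predicted probability until their cumulative
    mass reaches `tau`; larger sets encode higher uncertainty. Returns a
    {0,1} candidate mask of shape (batch, num_classes).
    """
    num_classes = logits.size(1)
    # Detached classifier output: a cheap stand-in for the generative term.
    probs = logits.softmax(dim=1).detach()
    sorted_p, order = probs.sort(dim=1, descending=True)
    cum = sorted_p.cumsum(dim=1)
    # Keep a class if the probability mass ranked before it is below tau
    # (this always keeps at least the top-1 prediction).
    keep = ((cum - sorted_p) < tau).float()
    candidates = torch.zeros_like(probs)
    candidates.scatter_(1, order, keep)
    noisy = F.one_hot(noisy_labels, num_classes).float()
    return (noisy + candidates).clamp_max(1.0)

def pls_loss(logits: torch.Tensor, partial_mask: torch.Tensor) -> torch.Tensor:
    """Partial-label loss: maximise probability mass inside the candidate set."""
    probs = logits.softmax(dim=1)
    inside = (probs * partial_mask).sum(dim=1).clamp_min(1e-8)
    return -inside.log().mean()
```

With `tau` close to 1, the candidate sets grow large for uncertain samples and shrink towards the noisy label plus the top prediction for confident ones, which is one plausible reading of "coverage and uncertainty" in the abstract.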
Related papers
- Reduction-based Pseudo-label Generation for Instance-dependent Partial Label Learning [41.345794038968776]
We propose to leverage reduction-based pseudo-labels to alleviate the influence of incorrect candidate labels.
We show that reduction-based pseudo-labels exhibit greater consistency with the Bayes optimal classifier than pseudo-labels generated directly from the predictive model (a generic sketch follows below).
arXiv Detail & Related papers (2024-10-28T07:32:20Z)
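As a hedged illustration of the candidate-label idea in the entry above, the sketch below restricts pseudo-labels to each sample's candidate set; the paper's actual "reduction-based" construction may differ.

```python
import torch

def candidate_pseudo_labels(logits: torch.Tensor,
                            candidate_mask: torch.Tensor) -> torch.Tensor:
    """Generic partial-label pseudo-labelling sketch (illustrative only; the
    cited paper's reduction-based construction may differ).

    Restricts the softmax to each sample's candidate label set and
    renormalises, so probability mass assigned to non-candidate (hence
    incorrect) labels cannot steer self-training.
    """
    probs = logits.softmax(dim=1) * candidate_mask  # zero out non-candidates
    return probs / probs.sum(dim=1, keepdim=True).clamp_min(1e-8)
```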
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important, long-standing problem in machine learning with real-world applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can degrade the performance of existing label-noise learning (LNL) algorithms by a large margin.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch so that the loss values of clean and noisy labels become distinguishable again (see the sketch below).
arXiv Detail & Related papers (2023-05-28T06:26:23Z)
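A minimal sketch of the adversarial label perturbation mentioned above, assuming labels are kept as soft vectors and nudged one step in the loss-increasing direction each epoch; the authors' exact procedure may differ, and `eps` is an illustrative parameter.

```python
import torch

def adversarially_perturb_labels(logits: torch.Tensor,
                                 soft_labels: torch.Tensor,
                                 eps: float = 0.1) -> torch.Tensor:
    """One-step adversarial perturbation of soft labels (generic sketch, not
    the cited paper's exact method).

    Moves each soft label slightly in the direction that increases the
    cross-entropy, then re-projects onto the probability simplex. Afterwards
    the per-sample losses of clean and noisy examples tend to separate, so a
    threshold (or a two-component mixture) on the losses can split them.
    """
    logits = logits.detach()
    labels = soft_labels.clone().requires_grad_(True)
    loss = -(labels * logits.log_softmax(dim=1)).sum(dim=1).mean()
    (grad,) = torch.autograd.grad(loss, labels)
    perturbed = (labels + eps * grad).clamp_min(0.0)
    return (perturbed / perturbed.sum(dim=1, keepdim=True)).detach()
```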
- Towards the Identifiability in Noisy Label Learning: A Multinomial Mixture Approach [37.32107678838193]
Learning from noisy labels (LNL) plays a crucial role in deep learning.
The most promising LNL methods rely on identifying clean-label samples from a dataset with noisy annotations.
We propose a method that automatically generates additional noisy labels by estimating the noisy label distribution from nearest neighbours (see the sketch below).
arXiv Detail & Related papers (2023-01-04T01:54:33Z)
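The nearest-neighbour estimation step described above can be sketched as follows; this is a generic scikit-learn illustration, not the paper's multinomial-mixture estimator, and `k` is an assumed hyperparameter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_noisy_label_distribution(features: np.ndarray,
                                 noisy_labels: np.ndarray,
                                 num_classes: int,
                                 k: int = 10) -> np.ndarray:
    """Per-sample noisy-label distribution from nearest neighbours (generic
    sketch; the cited paper's estimator may differ).

    Each sample's distribution is the label histogram of its k nearest
    neighbours in feature space; additional noisy labels can then be drawn
    from it, e.g. via np.random.choice, to aid identifiability.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)       # idx[:, 0] is the sample itself
    neighbour_labels = noisy_labels[idx[:, 1:]]          # shape (n, k)
    hist = np.stack([np.bincount(row, minlength=num_classes)
                     for row in neighbour_labels])
    return hist / hist.sum(axis=1, keepdims=True)
```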
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label-distribution perspective for positive-unlabeled (PU) learning.
Motivated by this view, we pursue consistency between the predicted and ground-truth label distributions (see the sketch below).
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
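One simple way to realise the label-distribution consistency described above is to match the average predicted positive probability to the known class prior; this is an illustrative reading, and the paper's full objective likely contains additional terms.

```python
import torch

def label_distribution_loss(logits: torch.Tensor,
                            pos_mask: torch.Tensor,
                            class_prior: float) -> torch.Tensor:
    """PU loss with a label-distribution-consistency term (generic sketch,
    not the exact Dist-PU objective).

    The consistency term pulls the mean predicted positive probability over
    all (mostly unlabeled) data towards the class prior, while labelled
    positives are pushed towards probability one.
    """
    p = torch.sigmoid(logits).squeeze(-1)
    consistency = (p.mean() - class_prior).abs()           # match the prior
    pos_term = -p[pos_mask].clamp_min(1e-8).log().mean()   # fit positives
    return pos_term + consistency
```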
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label recognition models with partial positive labels (MLR-PPL) is attracting increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels (see the sketch below).
An ensemble of labels is adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
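The class-conditional contrastive mechanism in the entry above can be sketched in a supervised-contrastive style, conditioning positive pairs on shared pseudo labels; this is a generic rendering, and the paper's formulation may differ.

```python
import torch
import torch.nn.functional as F

def class_conditional_contrastive(embeddings: torch.Tensor,
                                  pseudo_labels: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Supervised-contrastive-style loss conditioned on pseudo labels
    (generic sketch; the cited paper's formulation may differ).

    Pulls together embeddings that share a pseudo label and pushes apart the
    rest, reducing direct reliance on possibly-noisy pseudo-label targets.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    sim = sim - torch.eye(len(z), device=z.device) * 1e9  # mask self-pairs
    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    same.fill_diagonal_(False)                            # no self-positives
    pos = same.float()
    log_prob = sim.log_softmax(dim=1)
    pos_counts = pos.sum(dim=1).clamp_min(1.0)
    return -((log_prob * pos).sum(dim=1) / pos_counts).mean()
```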
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise will degrade the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.