Learning with Noisy Labels: Interconnection of Two Expectation-Maximizations
- URL: http://arxiv.org/abs/2401.04390v1
- Date: Tue, 9 Jan 2024 07:22:30 GMT
- Title: Learning with Noisy Labels: Interconnection of Two Expectation-Maximizations
- Authors: Heewon Kim, Hyun Sung Chang, Kiho Cho, Jaeyun Lee, Bohyung Han
- Abstract summary: Labor-intensive labeling has become a bottleneck in developing computer vision algorithms based on deep learning.
We address the learning with noisy labels (LNL) problem, which is formalized as the task of finding a structured manifold in the midst of noisy data.
Our algorithm achieves state-of-the-art performance in multiple standard benchmarks with substantial margins under various types of label noise.
- Score: 41.65589788264123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labor-intensive labeling has become a bottleneck in developing computer vision
algorithms based on deep learning. For this reason, dealing with imperfect
labels has increasingly gained attention and has become an active field of
study. We address the learning with noisy labels (LNL) problem, which is formalized
as a task of finding a structured manifold in the midst of noisy data. In this
framework, we provide a proper objective function and an optimization algorithm
based on two expectation-maximization (EM) cycles. The separate networks
associated with the two EM cycles collaborate to optimize the objective
function, where one model is for distinguishing clean labels from corrupted
ones while the other is for refurbishing the corrupted labels. This approach
ultimately results in a non-collapsing LNL-flywheel model. Experiments show
that our algorithm achieves state-of-the-art performance in multiple standard
benchmarks with substantial margins under various types of label noise.
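To make the two-cycle structure concrete, below is a minimal numpy sketch in which weighted class centroids stand in for the paper's two deep networks: one EM cycle re-estimates which given labels are clean, and the other refurbishes the labels the first cycle distrusts. The models, thresholds, and schedule here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes with 30% symmetric label noise.
n = 400
X = np.vstack([rng.normal(-1.5, 1, (n // 2, 2)), rng.normal(1.5, 1, (n // 2, 2))])
y_true = np.repeat([0, 1], n // 2)
y_noisy = y_true.copy()
flip = rng.random(n) < 0.3
y_noisy[flip] = 1 - y_noisy[flip]

def fit_centroids(X, labels, w):
    # Weighted class centroids stand in for the paper's deep networks.
    return np.stack([(w[labels == c, None] * X[labels == c]).sum(0)
                     / (w[labels == c].sum() + 1e-8) for c in (0, 1)])

def posteriors(X, mu):
    # Softmax over negative squared distances to the class centroids.
    d = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    p = np.exp(-d)
    return p / p.sum(1, keepdims=True)

labels = y_noisy.copy()
clean_prob = np.ones(n)  # current belief that each given label is clean
for _ in range(10):
    # EM cycle 1 (separation): model A is refit with samples weighted by
    # their clean belief (M-step); the belief is then re-estimated as the
    # posterior that the observed label agrees with model A (E-step).
    mu_a = fit_centroids(X, labels, clean_prob)
    clean_prob = posteriors(X, mu_a)[np.arange(n), labels]
    # EM cycle 2 (refurbishment): model B trains only on the labels that
    # cycle 1 trusts, then relabels the distrusted samples, feeding the
    # cleaner labels back into cycle 1 -- the "flywheel" coupling.
    trusted = (clean_prob >= 0.5).astype(float)
    mu_b = fit_centroids(X, labels, trusted)
    distrust = clean_prob < 0.5
    labels[distrust] = posteriors(X, mu_b)[distrust].argmax(1)

print("noisy label accuracy:      ", (y_noisy == y_true).mean())
print("refurbished label accuracy:", (labels == y_true).mean())
```

On a toy problem like this, the refurbished labels typically end up more accurate than the given noisy ones, which is the flywheel effect in miniature.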
Related papers
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label).
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels not to contain the true label, which makes the problem more practical.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm, as sketched below.
arXiv Detail & Related papers (2024-02-07T13:32:47Z)
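A rough sketch of the weighted nearest-neighbour pseudo-labelling idea; the function, its parameters, and the voting details are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def weighted_knn_pseudo_labels(feats, cand_sets, n_classes, k=10, iters=3):
    """Distance-weighted kNN voting over candidate label sets (a sketch of
    the NPLL pseudo-labelling idea, not the paper's exact procedure)."""
    rng = np.random.default_rng(0)
    n = len(feats)
    # Start from a random label inside each candidate set.
    pseudo = np.array([rng.choice(sorted(s)) for s in cand_sets])
    dist = np.linalg.norm(feats[:, None] - feats[None], axis=-1)
    np.fill_diagonal(dist, np.inf)                 # no self-votes
    nbrs = np.argsort(dist, axis=1)[:, :k]
    for _ in range(iters):
        new = pseudo.copy()
        for i in range(n):
            votes = np.zeros(n_classes)
            for j in nbrs[i]:
                votes[pseudo[j]] += 1.0 / (dist[i, j] + 1e-8)
            # NPLL allows the true label to fall outside the candidate set,
            # so fall back to the unrestricted argmax when the set gets no votes.
            in_set = [c for c in cand_sets[i] if votes[c] > 0]
            new[i] = max(in_set, key=lambda c: votes[c]) if in_set else int(votes.argmax())
        pseudo = new
    return pseudo
```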
- Multiclass Learning from Noisy Labels for Non-decomposable Performance Measures [15.358504449550013]
We design algorithms to learn from noisy labels for two broad classes of non-decomposable performance measures.
In both cases, we develop noise-corrected versions of the algorithms under the widely studied class-conditional noise models; a generic correction of this kind is sketched below.
Our experiments demonstrate the effectiveness of our algorithms in handling label noise.
arXiv Detail & Related papers (2024-02-01T23:03:53Z)
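For background, a minimal sketch of the standard backward loss correction under class-conditional noise, the kind of noise-corrected estimator such algorithms build on; the paper's handling of non-decomposable measures goes beyond this plain cross-entropy case.

```python
import numpy as np

def backward_corrected_loss(probs, y_noisy, T):
    """Backward correction for class-conditional noise: with transition
    matrix T[i, j] = P(noisy = j | clean = i), the corrected losses
    T^{-1} @ loss are unbiased estimates of the clean losses."""
    loss = -np.log(probs + 1e-12)              # (n, C) per-class cross-entropy
    corrected = loss @ np.linalg.inv(T).T      # apply T^{-1} row-wise
    return corrected[np.arange(len(y_noisy)), y_noisy].mean()

# Example: two classes with 20% symmetric flips.
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
print(backward_corrected_loss(probs, np.array([0, 1]), T))
```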
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience; plain label propagation is sketched below.
Our method outperforms multiple baselines by clear margins across broad noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
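The "propagated denoised labels" ingredient is reminiscent of classic label propagation on a graph; here is a bare-bones sketch of that propagation step alone (ERASE's prototype pseudo-labels and error-resilient updates are omitted, and the function is illustrative).

```python
import numpy as np

def propagate_labels(A, Y, alpha=0.9, iters=50):
    """Classic label propagation: smooth one-hot (noisy) labels Y over the
    graph with adjacency A while staying anchored to the given labels."""
    d = A.sum(axis=1)
    S = A / (np.sqrt(np.outer(d, d)) + 1e-12)   # D^{-1/2} A D^{-1/2}
    F = Y.astype(float)
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y   # diffuse, keep label anchor
    return F.argmax(axis=1)                     # propagated (denoised) labels
```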
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples, through meta-learning; the shape of this loss is sketched below.
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
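The weighted mix described above can be written as a per-sample bootstrapping loss; in L2B the weights come from meta-learning, whereas in this sketch they are plain inputs, so this only illustrates the shape of the objective.

```python
import numpy as np

def bootstrap_loss(probs, y_obs, y_pseudo, w_obs, w_pseudo):
    """Per-sample weighted mix of the loss on the observed label and the
    loss on the model's own pseudo-label (the weights w_obs / w_pseudo are
    meta-learned in L2B; here they are given)."""
    idx = np.arange(len(y_obs))
    ce_obs = -np.log(probs[idx, y_obs] + 1e-12)       # loss on given labels
    ce_pseudo = -np.log(probs[idx, y_pseudo] + 1e-12) # loss on pseudo-labels
    return (w_obs * ce_obs + w_pseudo * ce_pseudo).mean()
```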
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space; a neighbor-consistency regularizer of this kind is sketched below.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
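One way to operationalize that idea is a regularizer pulling each sample's prediction toward the average prediction of its feature-space neighbours; this KL-based sketch is an assumption about the form, not the paper's exact objective.

```python
import numpy as np

def neighbor_consistency_reg(feats, probs, k=5):
    """Penalize KL divergence between each sample's predicted distribution
    and the mean predicted distribution of its k nearest feature-space
    neighbours."""
    eps = 1e-12
    dist = np.linalg.norm(feats[:, None] - feats[None], axis=-1)
    np.fill_diagonal(dist, np.inf)              # exclude self
    nbrs = np.argsort(dist, axis=1)[:, :k]
    target = probs[nbrs].mean(axis=1)           # (n, C) neighbour average
    kl = (target * (np.log(target + eps) - np.log(probs + eps))).sum(axis=1)
    return kl.mean()
```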
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise degrades the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
- An Ensemble Noise-Robust K-fold Cross-Validation Selection Method for Noisy Labels [0.9699640804685629]
Large-scale datasets tend to contain mislabeled samples that can be memorized by deep neural networks (DNNs).
We present Ensemble Noise-robust K-fold Cross-Validation Selection (E-NKCVS) to effectively select clean samples from noisy data; the selection idea is sketched below.
We evaluate our approach on various image and text classification tasks where the labels have been manually corrupted with different noise ratios.
arXiv Detail & Related papers (2021-07-06T02:14:52Z)
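The selection idea is that each sample is predicted only by models that never trained on it; samples whose held-out predictions repeatedly agree with their given label are kept as clean. A sketch with a stand-in scikit-learn classifier (the ensemble details and threshold are illustrative assumptions, not E-NKCVS itself).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def kfold_clean_selection(X, y, k=5, repeats=3, agree_thresh=0.6):
    """Repeat K-fold CV; count how often a sample's held-out prediction
    matches its given label; keep samples that agree often enough."""
    votes = np.zeros(len(y))
    for r in range(repeats):
        kf = KFold(n_splits=k, shuffle=True, random_state=r)
        for tr, te in kf.split(X):
            clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
            votes[te] += clf.predict(X[te]) == y[te]
    return votes / repeats >= agree_thresh      # boolean mask of 'clean' samples
```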
- Co-Seg: An Image Segmentation Framework Against Label Corruption [8.219887855003648]
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training.
We propose a novel framework, namely Co-Seg, to collaboratively train segmentation networks on datasets which include low-quality noisy labels.
Our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.
arXiv Detail & Related papers (2021-01-31T20:01:40Z)
- Analysis of label noise in graph-based semi-supervised learning [2.4366811507669124]
In machine learning, one must acquire labels to supervise a model that can generalize to unseen data.
It is often the case that most of our data is unlabeled.
Semi-supervised learning (SSL) alleviates that by making strong assumptions about the relation between the labels and the input data distribution.
arXiv Detail & Related papers (2020-09-27T22:13:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.