Combating noisy labels by agreement: A joint training method with
co-regularization
- URL: http://arxiv.org/abs/2003.02752v3
- Date: Wed, 22 Apr 2020 17:06:32 GMT
- Title: Combating noisy labels by agreement: A joint training method with
co-regularization
- Authors: Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
- Abstract summary: We propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training.
We show that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
- Score: 27.578738673827658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning with noisy labels is a practically challenging problem in
weakly supervised learning. The state-of-the-art approaches "Decoupling" and
"Co-teaching+" claim that the "disagreement" strategy is crucial for
alleviating the problem of learning with noisy labels. In this paper, we start
from a different perspective and propose a robust learning paradigm called
JoCoR, which aims to reduce the diversity of two networks during training.
Specifically, we first use two networks to make predictions on the same
mini-batch data and calculate a joint loss with Co-Regularization for each
training example. Then we select small-loss examples to update the parameters
of both networks simultaneously. Trained with the joint loss, the two networks
become increasingly similar due to the effect of Co-Regularization.
Extensive experimental results on corrupted data from benchmark datasets
including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is
superior to many state-of-the-art approaches for learning with noisy labels.
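
To make the training step concrete, here is a minimal PyTorch sketch of one JoCoR update, assuming `net1` and `net2` are classifiers and a single `optimizer` covers both networks' parameters; `lam` stands in for the paper's co-regularization weight and the fixed `keep_ratio` replaces the paper's scheduled small-loss selection rate, so both values are illustrative assumptions.

```python
# A minimal sketch of one JoCoR training step; `lam` and the fixed
# `keep_ratio` are illustrative stand-ins, not the paper's settings.
import torch
import torch.nn.functional as F

def jocor_step(net1, net2, optimizer, x, y, lam=0.85, keep_ratio=0.7):
    logits1, logits2 = net1(x), net2(x)

    # Supervised part: per-example cross-entropy from both networks.
    ce = (F.cross_entropy(logits1, y, reduction="none")
          + F.cross_entropy(logits2, y, reduction="none"))

    # Co-regularization: symmetric KL divergence between the two
    # predictions, pushing the networks toward agreement.
    log_p1 = F.log_softmax(logits1, dim=1)
    log_p2 = F.log_softmax(logits2, dim=1)
    kl = (F.kl_div(log_p1, log_p2.exp(), reduction="none").sum(dim=1)
          + F.kl_div(log_p2, log_p1.exp(), reduction="none").sum(dim=1))

    per_example = (1.0 - lam) * ce + lam * kl

    # Small-loss selection: only the examples with the smallest joint
    # loss contribute to the simultaneous update of both networks.
    n_keep = max(1, int(keep_ratio * y.size(0)))
    selected = torch.argsort(per_example)[:n_keep]

    loss = per_example[selected].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper the kept fraction follows a schedule that shrinks as training progresses, filtering noisy examples more aggressively in later epochs; a fixed ratio is used here only to keep the sketch short.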
Related papers
- AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning [53.97072488455662]
Self-training models achieve state-of-the-art performance but suffer from error accumulation problem caused by mistakenly disambiguated instances.
We propose an asymmetric dual-task co-training model called AsyCo, which forces its two networks, i.e., a disambiguation network and an auxiliary network, to learn from different views explicitly.
Experiments on both uniform and instance-dependent partially labeled datasets demonstrate the effectiveness of AsyCo.
arXiv Detail & Related papers (2024-07-21T02:08:51Z)
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning methods.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- CrossSplit: Mitigating Label Noise Memorization through Data Splitting [25.344386272010397]
We propose a novel training procedure to mitigate the memorization of noisy labels, called CrossSplit.
Experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method can outperform the current state-of-the-art in a wide range of noise ratios.
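
The summary above describes splitting the training data so that each network's targets are softened by a peer that never trained on those labels; below is a minimal sketch of that idea, assuming two networks trained on disjoint halves and a hypothetical interpolation weight `alpha` that is not a value from the paper.

```python
# A minimal sketch of the cross-split idea: each half's targets are
# blended with predictions from the peer network, which never saw that
# half's labels. `alpha` is a hypothetical knob, not the paper's value.
import torch
import torch.nn.functional as F

def peer_soft_targets(peer_net, x, y_onehot, alpha=0.5):
    """Blend the given (possibly noisy) labels with peer predictions."""
    with torch.no_grad():
        peer_probs = F.softmax(peer_net(x), dim=1)
    return alpha * y_onehot + (1.0 - alpha) * peer_probs

def cross_split_loss(net, peer_net, x, y, num_classes):
    """Train `net` on its half of the data using peer-softened targets."""
    y_onehot = F.one_hot(y, num_classes).float()
    targets = peer_soft_targets(peer_net, x, y_onehot)
    log_probs = F.log_softmax(net(x), dim=1)
    return -(targets * log_probs).sum(dim=1).mean()  # soft cross-entropy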
arXiv Detail & Related papers (2022-12-03T19:09:56Z)
- Agreement or Disagreement in Noise-tolerant Mutual Learning? [9.890478302701315]
We propose an end-to-end noise-tolerant framework named MLC.
It adjusts the dual network with divergent regularization to keep the mechanism effective.
The proposed method can utilize the noisy data to improve the accuracy, generalization, and robustness of the network.
arXiv Detail & Related papers (2022-03-29T08:00:51Z)
- Synergistic Network Learning and Label Correction for Noise-robust Image Classification [28.27739181560233]
Deep Neural Networks (DNNs) tend to overfit training label noise, resulting in poorer model performance in practice.
We propose a robust label correction framework combining the ideas of small loss selection and noise correction.
We demonstrate our method on both synthetic and real-world datasets with different noise types and rates.
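
The summary names the two ingredients, small-loss selection and noise correction; the sketch below shows one way they can be combined, with the keep ratio and confidence threshold `tau` as hypothetical values rather than the paper's.

```python
# A minimal sketch combining small-loss selection with label correction:
# trust small-loss labels, relabel confident large-loss examples, and
# drop the rest. `keep_ratio` and `tau` are hypothetical values.
import torch
import torch.nn.functional as F

def select_and_correct(logits, y, keep_ratio=0.7, tau=0.9):
    losses = F.cross_entropy(logits, y, reduction="none")
    n_keep = max(1, int(keep_ratio * y.size(0)))
    clean_idx = torch.argsort(losses)[:n_keep]

    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)

    corrected = y.clone()
    trusted = torch.zeros_like(y, dtype=torch.bool)
    trusted[clean_idx] = True              # small-loss labels kept as-is
    fix = (~trusted) & (conf > tau)        # confidently relabel the rest
    corrected[fix] = pred[fix]
    used = trusted | fix                   # unconfident noisy ones dropped
    return corrected, used
```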
arXiv Detail & Related papers (2022-02-27T23:06:31Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets exhibiting both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
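
A minimal sketch of a neighbor-consistency regularizer matching the description above: each prediction is pulled toward a similarity-weighted average of its nearest neighbors' predictions in feature space; the neighborhood size `k` is a hypothetical choice.

```python
# A minimal sketch of neighbor-consistency regularization over a batch;
# `k` is a hypothetical neighborhood size, not the paper's setting.
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5):
    k = min(k, features.size(0) - 1)
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                      # cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    topk_sim, topk_idx = sim.topk(k, dim=1)
    weights = F.softmax(topk_sim, dim=1)         # normalize per example

    probs = F.softmax(logits, dim=1)
    neighbor_probs = probs[topk_idx]             # shape (B, k, C)
    target = (weights.unsqueeze(-1) * neighbor_probs).sum(dim=1).detach()

    log_probs = F.log_softmax(logits, dim=1)
    return F.kl_div(log_probs, target, reduction="batchmean")
```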
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- CoMatch: Semi-supervised Learning with Contrastive Graph Regularization [86.84486065798735]
CoMatch is a new semi-supervised learning method that unifies dominant approaches.
It achieves state-of-the-art performance on multiple datasets.
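
As a rough sketch of contrastive graph regularization in the spirit of the title: build a target graph from pseudo-label similarity and align the embedding-similarity graph to it. The threshold `t` and temperature `temp` are hypothetical, and this is a simplification, not CoMatch's full pipeline.

```python
# A rough sketch of graph-based contrastive regularization: embeddings
# of samples with similar pseudo-labels are pulled together. `t` and
# `temp` are hypothetical parameters.
import torch
import torch.nn.functional as F

def graph_contrastive_loss(embeddings, pseudo_probs, t=0.8, temp=0.1):
    z = F.normalize(embeddings, dim=1)
    emb_sim = F.softmax(z @ z.t() / temp, dim=1)       # embedding graph

    target = pseudo_probs @ pseudo_probs.t()           # pseudo-label graph
    target = (target >= t).float()
    target.fill_diagonal_(1.0)
    target = target / target.sum(dim=1, keepdim=True)  # row-normalize

    # Cross-entropy between the two graphs, row by row.
    return -(target * torch.log(emb_sim + 1e-8)).sum(dim=1).mean()
```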
arXiv Detail & Related papers (2020-11-23T02:54:57Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recent advanced unsupervised learning approaches use a siamese-like framework to compare two "views" of the same image for learning representations.
This work aims to bring the notion of distance in label space into unsupervised learning, making the model aware of the soft degree of similarity between positive and negative pairs.
Despite its conceptual simplicity, we show empirically that the proposed solution, Unsupervised image mixtures (Un-Mix), learns subtler, more robust, and more generalized representations from the transformed input and the corresponding new label space.
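
As a rough illustration of the mixture idea above, the sketch below mixes one branch's batch with its reversed order and softens the similarity target by the mixing coefficient; the generic `encoder` and the plain cosine loss are simplifying assumptions, not the paper's exact pipeline.

```python
# A minimal sketch of image mixtures in a siamese setup: one branch sees
# a mixup of the batch with its reversed order, and the similarity
# target is softened by the mixing coefficient `lam` (illustrative).
import torch
import torch.nn.functional as F

def unmix_style_loss(encoder, view1, view2, lam=0.7):
    mixed = lam * view2 + (1.0 - lam) * view2.flip(0)  # mix with reversed batch

    z1 = F.normalize(encoder(view1), dim=1)
    zm = F.normalize(encoder(mixed), dim=1)

    # Soft targets: each mixed embedding should match its own pair with
    # weight lam and the reversed-order pair with weight (1 - lam).
    pos = -(z1 * zm).sum(dim=1).mean()
    rev = -(z1.flip(0) * zm).sum(dim=1).mean()
    return lam * pos + (1.0 - lam) * rev
```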
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
- DivideMix: Learning with Noisy Labels as Semi-supervised Learning [111.03364864022261]
We propose DivideMix, a framework for learning with noisy labels.
Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.
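
A minimal sketch of the loss-based division step commonly attributed to DivideMix: fit a two-component Gaussian mixture to per-example losses and treat the posterior of the low-loss component as the probability that a label is clean. The 0.5 threshold is an assumption here, and the subsequent MixMatch-style semi-supervised training is omitted.

```python
# A minimal sketch of co-divide: a two-component GMM over per-example
# losses separates probably-clean from probably-noisy labels. The 0.5
# threshold is an assumed default.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

@torch.no_grad()
def co_divide(net, loader, device, threshold=0.5):
    losses = []
    for x, y in loader:
        logits = net(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device),
                                      reduction="none").cpu())
    losses = torch.cat(losses).numpy().reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)

    gmm = GaussianMixture(n_components=2, max_iter=20, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = np.argmin(gmm.means_.ravel())
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > threshold   # True: treat as labeled (clean)
```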
arXiv Detail & Related papers (2020-02-18T06:20:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.