Agreement or Disagreement in Noise-tolerant Mutual Learning?
- URL: http://arxiv.org/abs/2203.15317v1
- Date: Tue, 29 Mar 2022 08:00:51 GMT
- Title: Agreement or Disagreement in Noise-tolerant Mutual Learning?
- Authors: Jiarun Liu, Daguang Jiang, Yukun Yang, Ruirui Li
- Abstract summary: We propose an end-to-end noise-tolerant framework named MLC.
It adjusts the dual networks with a divergence regularization to keep the dual-network mechanism effective.
The proposed method can utilize noisy data to improve the accuracy, generalization, and robustness of the network.
- Score: 9.890478302701315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has achieved remarkable results in many fields but
suffers from noisy labels in datasets. The state-of-the-art learning-with-noisy-labels
methods Co-teaching and Co-teaching+ confront noisy labels via mutual information
exchanged between dual networks. However, the dual networks always tend to
converge, which weakens the dual-network mechanism's resistance to noisy labels.
In this paper, we propose an end-to-end noise-tolerant framework named MLC. It
adjusts the dual networks with a divergence regularization to ensure the
effectiveness of the mechanism. In addition, we correct the label distribution
according to the agreement between the dual networks. The proposed method can
utilize the noisy data to improve the accuracy, generalization, and robustness
of the network. We test the proposed method on the simulated noisy datasets
MNIST and CIFAR-10 and on the real-world noisy dataset Clothing1M. The
experimental results show that our method outperforms the previous
state-of-the-art methods. Moreover, our method is network-agnostic, so it is
applicable to many tasks.
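The abstract names two mechanisms but this listing does not give their exact form: a divergence regularization that keeps the dual networks from converging, and a label correction driven by their agreement. Below is a minimal PyTorch sketch of one plausible reading, assuming a negative symmetric-KL term as the divergence regularizer and soft-label correction only where both networks agree confidently; all names and hyperparameters (`net_a`, `net_b`, `lambda_div`, `agree_thresh`) are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def mlc_style_step(net_a, net_b, x, noisy_y, lambda_div=0.1, agree_thresh=0.9):
    """One dual-network training step with a divergence regularizer and
    agreement-based label correction. A sketch of one plausible reading
    of the abstract, not the authors' actual losses."""
    logits_a, logits_b = net_a(x), net_b(x)
    p_a, p_b = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)

    # Agreement: both networks predict the same class with high confidence.
    conf_a, pred_a = p_a.max(dim=1)
    conf_b, pred_b = p_b.max(dim=1)
    agree = (pred_a == pred_b) & (conf_a > agree_thresh) & (conf_b > agree_thresh)

    # Label correction: where the networks agree, replace the (possibly
    # noisy) one-hot label with their averaged soft prediction.
    targets = F.one_hot(noisy_y, logits_a.size(1)).float()
    corrected = (0.5 * (p_a + p_b)).detach()
    targets[agree] = corrected[agree]

    ce_a = -(targets * F.log_softmax(logits_a, dim=1)).sum(dim=1).mean()
    ce_b = -(targets * F.log_softmax(logits_b, dim=1)).sum(dim=1).mean()

    # Divergence regularizer: subtracting a symmetric KL term rewards
    # disagreement, discouraging the two networks from converging.
    sym_kl = 0.5 * (
        F.kl_div(F.log_softmax(logits_a, dim=1), p_b, reduction="batchmean")
        + F.kl_div(F.log_softmax(logits_b, dim=1), p_a, reduction="batchmean"))
    return ce_a + ce_b - lambda_div * sym_kl
```

Note the sign: subtracting the symmetric KL is the opposite convention from co-regularization methods such as JoCoR (listed below), which add it to reduce disagreement.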
Related papers
- Dual Clustering Co-teaching with Consistent Sample Mining for Unsupervised Person Re-Identification [13.65131691012468]
In unsupervised person Re-ID, the peer-teaching strategy, which leverages two networks to facilitate training, has proven to be an effective way to deal with pseudo-label noise.
This paper proposes a novel Dual Clustering Co-teaching (DCCT) approach to handle this issue.
DCCT mainly exploits the features extracted by two networks to generate two sets of pseudo labels separately by clustering with different parameters.
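A minimal sketch of that dual-clustering step; DBSCAN and the specific `eps` values are assumptions, since the summary only states that the two networks' features are clustered with different parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dual_pseudo_labels(feats_a, feats_b, eps_a=0.5, eps_b=0.7, min_samples=4):
    """Generate two pseudo-label sets by clustering each network's
    features with different parameters (assumed concrete form)."""
    labels_a = DBSCAN(eps=eps_a, min_samples=min_samples).fit_predict(feats_a)
    labels_b = DBSCAN(eps=eps_b, min_samples=min_samples).fit_predict(feats_b)
    # -1 marks points DBSCAN refused to label; a looser eps merges more
    # points per cluster, so the two label sets differ in granularity.
    return labels_a, labels_b
```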
arXiv Detail & Related papers (2022-10-07T06:04:04Z)
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
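A hedged sketch of that re-estimation step: score each candidate sample by how often its feature-space nearest neighbors carry the same label. The cosine-similarity k-NN and the agreement score are assumed concrete forms, not the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def neighborhood_reliability(features, candidate_labels, k=10):
    """Score each sample by how often its k nearest feature-space
    neighbors share its candidate label (assumed form)."""
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t()                    # cosine similarity, (N, N)
    sims.fill_diagonal_(-float("inf"))          # exclude the sample itself
    nn_idx = sims.topk(k, dim=1).indices        # k nearest neighbors, (N, k)
    neighbor_labels = candidate_labels[nn_idx]  # (N, k)
    agree = (neighbor_labels == candidate_labels.unsqueeze(1)).float()
    # In [0, 1]: low scores flag samples whose neighborhoods contradict
    # their labels, i.e. likely noisy annotations.
    return agree.mean(dim=1)
```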
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble-labels strategy is adopted to update the pseudo labels, stabilizing the training of deep neural networks with noisy labels.
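Read plainly, an ensemble-labels update smooths each sample's pseudo label over time instead of overwriting it with the latest prediction. A sketch using an exponential moving average; the EMA form and the momentum value are assumptions rather than the paper's stated rule.

```python
import torch
import torch.nn.functional as F

def update_ensemble_labels(ensemble, logits, momentum=0.9):
    """Temporally smooth pseudo labels: keep most of the old estimate,
    mix in a little of the current prediction (assumed EMA form).
    `ensemble` holds the running soft labels for the batch, (N, C)."""
    current = F.softmax(logits.detach(), dim=1)
    ensemble = momentum * ensemble + (1.0 - momentum) * current
    return ensemble / ensemble.sum(dim=1, keepdim=True)  # renormalize

# A single noisy batch barely moves the labels; only predictions that are
# consistent across many updates shift them, which stabilizes training.
```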
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
- Noise-Tolerant Learning for Audio-Visual Action Recognition [31.641972732424463]
Video datasets are usually coarse-annotated or collected from the Internet.
We propose a noise-tolerant learning framework to find anti-interference model parameters against both noisy labels and noisy correspondence.
Our method significantly improves the robustness of the action recognition model and surpasses the baselines by a clear margin.
arXiv Detail & Related papers (2022-05-16T12:14:03Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets exhibiting both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
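A minimal sketch of a neighbor-consistency term in that spirit: pull each sample's prediction toward a similarity-weighted average of its nearest neighbors' predictions. The softmax weighting and the KL form are assumed details, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5):
    """Regularize each sample's prediction toward a similarity-weighted
    average of its k nearest neighbors' predictions (assumed form)."""
    feats = F.normalize(features.detach(), dim=1)
    sims = feats @ feats.t()
    sims.fill_diagonal_(-float("inf"))
    vals, idx = sims.topk(k, dim=1)                  # (N, k)
    weights = F.softmax(vals, dim=1)                 # similarity weighting
    probs = F.softmax(logits, dim=1)
    neighbor_avg = (weights.unsqueeze(2) * probs[idx].detach()).sum(dim=1)
    return F.kl_div(F.log_softmax(logits, dim=1), neighbor_avg,
                    reduction="batchmean")
```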
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of labels in its feature-space neighborhood.
Our method significantly surpasses previous methods on both CIFAR-10/CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
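A sketch of such a selection rule, with a neighborhood-agreement threshold as an assumed concrete criterion: a sample is kept as clean only if its annotated label matches enough of its feature-space neighbors.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_clean_samples(features, labels, k=10, thresh=0.5):
    """Keep a sample if its annotated label agrees with at least
    `thresh` of its k feature-space neighbors (assumed criterion)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # idx[:, 0] is the sample itself
    neighbor_labels = labels[idx[:, 1:]]    # (N, k)
    agreement = (neighbor_labels == labels[:, None]).mean(axis=1)
    return agreement >= thresh              # boolean mask of "clean" samples
```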
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Co-matching: Combating Noisy Labels by Augmentation Anchoring [2.0349696181833337]
We propose a learning algorithm called Co-matching, which balances the consistency and divergence between two networks by augmentation anchoring.
Experiments on three benchmark datasets demonstrate that Co-matching achieves results comparable to the state-of-the-art methods.
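Augmentation anchoring, as in FixMatch-style training, uses a prediction on a weakly augmented view as the target for a strongly augmented view; in a dual-network setup each network can anchor the other. A sketch under those assumptions; the cross-network assignment and the confidence threshold are illustrative, not Co-matching's exact rule.

```python
import torch
import torch.nn.functional as F

def co_matching_loss(net_a, net_b, x_weak, x_strong, thresh=0.95):
    """Each network's confident prediction on the weak view supervises
    the *other* network on the strong view (assumed cross-anchoring)."""
    with torch.no_grad():
        conf_a, pseudo_a = F.softmax(net_a(x_weak), dim=1).max(dim=1)
        conf_b, pseudo_b = F.softmax(net_b(x_weak), dim=1).max(dim=1)
    # Only confident anchors contribute; the rest are masked out.
    loss_b = (F.cross_entropy(net_b(x_strong), pseudo_a, reduction="none")
              * (conf_a > thresh).float()).mean()
    loss_a = (F.cross_entropy(net_a(x_strong), pseudo_b, reduction="none")
              * (conf_b > thresh).float()).mean()
    return loss_a + loss_b
```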
arXiv Detail & Related papers (2021-03-23T20:00:13Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
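The module itself is not specified in this summary. As a rough intuition for how a block-wise feature perturbation can keep two mutually learning networks complementary, here is a heavily simplified sketch that rescales a random horizontal stripe of the feature map; this is an assumed simplification and omits AWB's attention component entirely.

```python
import torch
import torch.nn as nn

class WaveBlockSketch(nn.Module):
    """Rescale a random horizontal stripe of the feature map so each
    network is pushed to rely on different regions (assumed, heavily
    simplified stand-in for AWB; its attention part is omitted)."""
    def __init__(self, ratio=0.3, scale=0.5):
        super().__init__()
        self.ratio, self.scale = ratio, scale

    def forward(self, x):                          # x: (B, C, H, W)
        if not self.training:
            return x
        h = x.size(2)
        band = max(1, int(h * self.ratio))
        start = torch.randint(0, h - band + 1, (1,)).item()
        mask = torch.ones_like(x)
        mask[:, :, start:start + band, :] = self.scale
        return x * mask
```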
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- Combating noisy labels by agreement: A joint training method with co-regularization [27.578738673827658]
We propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training.
We show that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
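JoCoR takes the opposite stance from divergence-seeking methods such as MLC above: its joint loss reduces disagreement between the two networks, and both are trained on the jointly small-loss examples. A sketch of the per-batch loss; the symmetric-KL co-regularization and small-loss selection follow the paper's formulation, while `lam` and `keep_ratio` are typical illustrative values.

```python
import torch
import torch.nn.functional as F

def jocor_loss(logits_a, logits_b, y, lam=0.5, keep_ratio=0.7):
    """Joint loss = supervised terms + agreement term, evaluated per
    sample; only the jointly small-loss examples are trained on."""
    ce = (F.cross_entropy(logits_a, y, reduction="none")
          + F.cross_entropy(logits_b, y, reduction="none"))
    p_a, p_b = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    # Symmetric KL pushes the two networks toward agreement.
    kl = (F.kl_div(F.log_softmax(logits_a, dim=1), p_b,
                   reduction="none").sum(dim=1)
          + F.kl_div(F.log_softmax(logits_b, dim=1), p_a,
                     reduction="none").sum(dim=1))
    per_sample = (1 - lam) * ce + lam * kl
    n_keep = max(1, int(keep_ratio * y.size(0)))
    small, _ = per_sample.topk(n_keep, largest=False)  # small-loss selection
    return small.mean()
```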
arXiv Detail & Related papers (2020-03-05T16:42:41Z)
- DivideMix: Learning with Noisy Labels as Semi-supervised Learning [111.03364864022261]
We propose DivideMix, a framework for learning with noisy labels.
Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.
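DivideMix's signature step is fitting a two-component Gaussian mixture to per-sample losses and treating the low-mean component as clean data, with the rest used as unlabeled data for semi-supervised training. A sketch of that split; the normalization and threshold values are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_by_loss(per_sample_losses, clean_prob=0.5):
    """Fit a 2-component GMM to per-sample losses; the component with
    the smaller mean is treated as the clean set (DivideMix-style)."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100,
                          reg_covar=5e-4).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    return p_clean > clean_prob  # True = likely clean; rest used as unlabeled
```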
arXiv Detail & Related papers (2020-02-18T06:20:06Z)