Co-matching: Combating Noisy Labels by Augmentation Anchoring
- URL: http://arxiv.org/abs/2103.12814v1
- Date: Tue, 23 Mar 2021 20:00:13 GMT
- Title: Co-matching: Combating Noisy Labels by Augmentation Anchoring
- Authors: Yangdi Lu, Yang Bo, Wenbo He
- Abstract summary: We propose a learning algorithm called Co-matching, which balances the consistency and divergence between two networks by augmentation anchoring.
Experiments on three benchmark datasets demonstrate that Co-matching achieves results comparable to state-of-the-art methods.
- Score: 2.0349696181833337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning with noisy labels is challenging because deep neural networks have the capacity to memorize them. In this paper, we propose a learning algorithm called Co-matching, which balances the consistency and divergence between two networks by augmentation anchoring. Specifically, one network generates an anchoring label from its prediction on a weakly-augmented image; its peer network, taking the strongly-augmented version of the same image as input, is then forced to produce a prediction close to the anchoring label. We update the two networks simultaneously by selecting small-loss instances to minimize both the unsupervised matching loss (which measures the consistency of the two networks) and the supervised classification loss (which measures classification performance). Because the unsupervised matching loss does not rely heavily on the noisy labels, it helps prevent their memorization. Experiments on three benchmark datasets demonstrate that Co-matching achieves results comparable to state-of-the-art methods.
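To make the procedure concrete, here is a minimal PyTorch-style sketch of one training step for one direction of the network pair. The function name, the selection ratio `keep_ratio`, and the weighting `lambda_u` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def co_matching_step(net_a, net_b, x_weak, x_strong, labels,
                     keep_ratio=0.7, lambda_u=1.0):
    """One simplified Co-matching update for a single network pair.

    net_a produces an anchoring label from the weakly-augmented image;
    net_b must match it on the strongly-augmented version. The
    hyperparameters here are assumptions, not values from the paper.
    """
    with torch.no_grad():
        # Anchoring label: net_a's prediction on the weak augmentation.
        anchor = F.softmax(net_a(x_weak), dim=1)

    logits_b = net_b(x_strong)

    # Supervised classification loss against the (possibly noisy) labels.
    sup_loss = F.cross_entropy(logits_b, labels, reduction="none")

    # Unsupervised matching loss: push net_b's prediction on the strong
    # view toward the anchoring label (cross-entropy with soft targets).
    log_prob_b = F.log_softmax(logits_b, dim=1)
    match_loss = -(anchor * log_prob_b).sum(dim=1)

    # Small-loss selection: keep the instances with the smallest combined
    # loss, treating them as likely-clean.
    total = sup_loss + lambda_u * match_loss
    n_keep = max(1, int(keep_ratio * total.numel()))
    keep = torch.topk(-total, n_keep).indices

    return total[keep].mean()
```

In the full algorithm the roles of the two networks are swapped so that each network both anchors and matches; the sketch omits that symmetric update and any scheduling of the selection ratio.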
Related papers
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- CrossSplit: Mitigating Label Noise Memorization through Data Splitting [25.344386272010397]
We propose CrossSplit, a novel training procedure that mitigates the memorization of noisy labels through data splitting (a hedged sketch follows this entry).
Experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method can outperform the current state-of-the-art in a wide range of noise ratios.
arXiv Detail & Related papers (2022-12-03T19:09:56Z)
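The summary names only the idea, so the sketch below shows one plausible reading: two networks train on disjoint halves of the data, and each network's targets are softened by its peer's predictions. The mixing weight `alpha` and the exact correction rule are assumptions.

```python
import torch
import torch.nn.functional as F

def peer_corrected_loss(net, peer, x, y_onehot, alpha=0.5):
    """Hypothetical CrossSplit-style update on one of two disjoint splits.

    `net` trains on this half of the data; `peer` was trained on the
    other half, so its prediction has not memorized these labels and
    can be used to soften them. `alpha` is an assumed mixing weight.
    """
    with torch.no_grad():
        peer_prob = F.softmax(peer(x), dim=1)
    # Soft target: interpolate the given (possibly noisy) one-hot label
    # with the peer network's prediction.
    target = alpha * y_onehot + (1.0 - alpha) * peer_prob
    log_prob = F.log_softmax(net(x), dim=1)
    return -(target * log_prob).sum(dim=1).mean()
```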
- Agreement or Disagreement in Noise-tolerant Mutual Learning? [9.890478302701315]
We propose MLC, an end-to-end noise-tolerant framework.
It regularizes the two networks to remain divergent, which keeps the mutual-learning mechanism effective.
The proposed method can utilize the noisy data to improve the accuracy, generalization, and robustness of the network.
arXiv Detail & Related papers (2022-03-29T08:00:51Z)
- CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning; a toy sketch follows this entry.
arXiv Detail & Related papers (2022-02-17T08:09:40Z)
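As a toy illustration of supervision from both label types, the sketch below takes a pseudo label from the most confident class and a complementary label from the least confident one. The confidence threshold and loss forms are assumptions, and the cross-network exchange of labels in CLS is omitted.

```python
import torch
import torch.nn.functional as F

def cls_style_losses(logits, tau_pos=0.95):
    """Toy pseudo/complementary labeling in the spirit of CLS.

    Positive learning: cross-entropy on high-confidence pseudo labels.
    Negative learning: push probability mass away from the least likely
    class. Threshold and loss forms are assumptions.
    """
    prob = F.softmax(logits, dim=1)
    conf, pseudo = prob.max(dim=1)           # pseudo labels
    comp = prob.argmin(dim=1)                # complementary labels

    mask = (conf >= tau_pos).float()         # keep confident pseudo labels
    pos_loss = (F.cross_entropy(logits, pseudo, reduction="none")
                * mask).mean()               # averaged over the batch

    # Negative learning: -log(1 - p(complementary class)).
    p_comp = prob.gather(1, comp.unsqueeze(1)).squeeze(1)
    neg_loss = -torch.log1p(-p_comp.clamp(max=1 - 1e-6)).mean()

    return pos_loss + neg_loss
```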
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its feature-space neighborhood (a sketch of this criterion follows the entry).
Our method significantly surpasses previous methods on both CIFAR-10/CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
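The selection criterion is concrete enough to sketch: keep a sample when its annotated label matches the dominant label among its k nearest neighbors in feature space. The value of k and the majority-vote rule are assumptions standing in for the paper's exact consistency measure.

```python
import torch
import torch.nn.functional as F

def neighborhood_consistent(features, labels, k=10):
    """Select samples whose annotated label matches the dominant label
    among their k nearest feature-space neighbors (an assumed variant
    of the paper's consistency criterion).

    features: (N, D) tensor of embeddings; labels: (N,) int tensor.
    Returns a boolean mask of likely-clean samples.
    """
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                          # cosine similarity matrix
    sim.fill_diagonal_(-float("inf"))        # exclude self-matches
    nn_idx = sim.topk(k, dim=1).indices      # (N, k) neighbor indices
    nn_labels = labels[nn_idx]               # neighbors' annotated labels
    majority = nn_labels.mode(dim=1).values  # dominant neighborhood label
    return majority == labels
```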
- Rank-Consistency Deep Hashing for Scalable Multi-Label Image Search [90.30623718137244]
We propose a novel deep hashing method for scalable multi-label image search.
A new rank-consistency objective aligns the similarity orders from the two spaces (semantic space and Hamming space).
A loss function penalizes samples whose semantic similarity and Hamming distance are mismatched; an illustrative form follows this entry.
arXiv Detail & Related papers (2021-02-02T13:46:58Z)
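One way to read the rank-consistency objective: if a pair (i, j) is semantically more similar than (i, k), its Hamming distance should be smaller. The margin-based relaxation below is an illustrative stand-in, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def rank_consistency_loss(codes_i, codes_j, codes_k, margin=2.0):
    """Illustrative rank-consistency penalty for hashing.

    Assumes pair (i, j) is semantically more similar than (i, k), so
    the relaxed Hamming distance d(i, j) should be smaller than
    d(i, k) by a margin. codes_* are tanh-relaxed binary codes.
    """
    # Relaxed Hamming distance for codes in [-1, 1]:
    # d = (bits - <u, v>) / 2 when codes are exactly +/-1.
    bits = codes_i.size(1)
    d_ij = 0.5 * (bits - (codes_i * codes_j).sum(dim=1))
    d_ik = 0.5 * (bits - (codes_i * codes_k).sum(dim=1))
    # Penalize orderings where distance contradicts semantic similarity.
    return F.relu(margin + d_ij - d_ik).mean()
```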
- Co-Seg: An Image Segmentation Framework Against Label Corruption [8.219887855003648]
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training.
We propose a novel framework, namely Co-Seg, to collaboratively train segmentation networks on datasets that include low-quality, noisy labels.
Our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.
arXiv Detail & Related papers (2021-01-31T20:01:40Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- DivideMix: Learning with Noisy Labels as Semi-supervised Learning [111.03364864022261]
We propose DivideMix, a framework for learning with noisy labels (its core data-division step is sketched after this entry).
Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.
arXiv Detail & Related papers (2020-02-18T06:20:06Z)
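DivideMix's characteristic step, dividing the training set by fitting a two-component Gaussian mixture to per-sample losses and treating the low-loss component as likely clean, can be sketched as follows. The normalization and threshold here are simplifications; in the paper, each network performs the division for its peer.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def co_divide(per_sample_losses, threshold=0.5):
    """Split data into likely-clean (labeled) and likely-noisy
    (unlabeled) sets by fitting a 2-component GMM to the losses,
    in the spirit of DivideMix's co-divide step (simplified).
    """
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    # Normalize losses to [0, 1] for a stable fit.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-12)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))   # low-loss component
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    return p_clean >= threshold   # True -> treat as labeled/clean
```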
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.