Model and Data Agreement for Learning with Noisy Labels
- URL: http://arxiv.org/abs/2212.01054v1
- Date: Fri, 2 Dec 2022 09:46:26 GMT
- Title: Model and Data Agreement for Learning with Noisy Labels
- Authors: Yuhang Zhang, Weihong Deng, Xingchen Cui, Yunfeng Yin, Hongzhi Shi,
Dongchao Wen
- Abstract summary: In this paper, we try to deal with error accumulation in noisy label learning from both model and data perspectives.
We introduce mean point ensemble to utilize a more robust loss function and more information from unselected samples to reduce error accumulation from the model perspective.
Our method outperforms state-of-the-art noisy label learning methods with different levels of label noise.
- Score: 37.17276277573844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning with noisy labels is a vital topic for practical deep learning as
models should be robust to noisy open-world datasets in the wild. The
state-of-the-art noisy label learning approach JoCoR fails when faced with a
large ratio of noisy labels. Moreover, selecting small-loss samples can also
cause error accumulation: once noisy samples are mistakenly selected as
small-loss samples, they are more likely to be selected again. In this paper,
we try to deal with error accumulation in noisy label learning from both model
and data perspectives. We introduce mean point ensemble to utilize a more
robust loss function and more information from unselected samples to reduce
error accumulation from the model perspective. Furthermore, since flipped
images have the same semantic meaning as the original images, we select
small-loss samples according to the loss values of the flipped images instead
of the original ones to reduce error accumulation from the data perspective. Extensive
experiments on CIFAR-10, CIFAR-100, and large-scale Clothing1M show that our
method outperforms state-of-the-art noisy label learning methods with different
levels of label noise. Our method can also be seamlessly combined with other
noisy label learning methods to further improve their performance and
generalize well to other tasks. The code is available at
https://github.com/zyh-uaiaaaa/MDA-noisy-label-learning.
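The data-perspective idea in the abstract (rank samples by the loss of their horizontally flipped copies, then keep the small-loss ones) can be sketched roughly as follows. This is a minimal illustration, not the official implementation: `predict_fn`, `keep_ratio`, and the NCHW batch layout are assumptions introduced here.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Per-sample cross-entropy from raw logits (numerically stabilized).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

def select_small_loss(predict_fn, images, labels, keep_ratio):
    """Return indices of presumed-clean samples.

    Losses are computed on horizontally flipped copies of the images
    (same semantics, different view), so samples memorized under their
    original view are less likely to be re-selected once mislabeled.
    """
    flipped = images[:, :, :, ::-1]                   # horizontal flip, NCHW
    losses = softmax_cross_entropy(predict_fn(flipped), labels)
    num_keep = int(keep_ratio * len(losses))
    return np.argsort(losses)[:num_keep]              # smallest-loss indices
```

The selected indices would then feed the usual small-loss training step; only the view used to compute the ranking loss changes.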
Related papers
- Combating Label Noise With A General Surrogate Model For Sample
Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition [70.00984078351927]
This paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy cases.
We propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise.
A Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions.
arXiv Detail & Related papers (2023-07-03T09:20:28Z)
- Learning advisor networks for noisy image classification [22.77447144331876]
We introduce the novel concept of advisor network to address the problem of noisy labels in image classification.
We trained it with a meta-learning strategy so that it can adapt throughout the training of the main model.
We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M which contains real-world noise, reporting state-of-the-art results.
arXiv Detail & Related papers (2022-11-08T11:44:08Z)
- Label-Noise Learning with Intrinsically Long-Tailed Data [65.41318436799993]
We propose a learning framework for label-noise learning with intrinsically long-tailed data.
Specifically, we propose two-stage bi-dimensional sample selection (TABASCO) to better separate clean samples from noisy samples.
arXiv Detail & Related papers (2022-08-21T07:47:05Z)
- UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning [89.56465237941013]
We propose UNICON, a simple yet effective sample selection method which is robust to high label noise.
We obtain an 11.4% improvement over the current state-of-the-art on CIFAR100 dataset with a 90% noise rate.
arXiv Detail & Related papers (2022-03-28T07:36:36Z)
- PARS: Pseudo-Label Aware Robust Sample Selection for Learning with Noisy Labels [5.758073912084364]
We propose PARS: Pseudo-Label Aware Robust Sample Selection.
PARS exploits all training samples using both the raw/noisy labels and estimated/refurbished pseudo-labels via self-training.
Results show that PARS significantly outperforms the state of the art in extensive studies on the noisy CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2022-01-26T09:31:55Z)
- Learning from Noisy Labels for Entity-Centric Information Extraction [17.50856935207308]
We propose a simple co-regularization framework for entity-centric information extraction.
These models are jointly optimized with task-specific loss, and are regularized to generate similar predictions.
In the end, we can take any of the trained models for inference.
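The co-regularization recipe summarized above (two models jointly optimized with their task losses plus a term pushing them toward similar predictions) can be sketched as a single joint loss. The symmetric-KL agreement term and the `alpha` weight are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def co_regularized_loss(logits_a, logits_b, labels, alpha=1.0):
    """Joint loss for two co-trained models.

    Each model minimizes its own cross-entropy on the (possibly noisy)
    labels, and both are additionally regularized to agree, here via a
    symmetric KL divergence between their predictive distributions.
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    n = np.arange(len(labels))
    # Task losses: mean cross-entropy for each model.
    ce = -np.log(pa[n, labels] + 1e-12).mean() - np.log(pb[n, labels] + 1e-12).mean()
    # Agreement regularizer: symmetric KL between the two prediction sets.
    kl_ab = np.sum(pa * (np.log(pa + 1e-12) - np.log(pb + 1e-12)), axis=1)
    kl_ba = np.sum(pb * (np.log(pb + 1e-12) - np.log(pa + 1e-12)), axis=1)
    agreement = 0.5 * (kl_ab + kl_ba).mean()
    return ce + alpha * agreement
```

Because both models share the agreement term but keep separate parameters, either one can be used at inference time, as the summary notes.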
arXiv Detail & Related papers (2021-04-17T22:49:12Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.