Fine-Grained Classification with Noisy Labels
- URL: http://arxiv.org/abs/2303.02404v1
- Date: Sat, 4 Mar 2023 12:32:45 GMT
- Title: Fine-Grained Classification with Noisy Labels
- Authors: Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Chenhui Guo, Yilong Yin
- Abstract summary: Learning with noisy labels (LNL) aims to ensure model generalization given a label-corrupted training set.
We investigate a rarely studied scenario of LNL on fine-grained datasets (LNL-FG).
We propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representation.
- Score: 31.128588235268126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning with noisy labels (LNL) aims to ensure model generalization given a
label-corrupted training set. In this work, we investigate a rarely studied
scenario of LNL on fine-grained datasets (LNL-FG), which is more practical and
challenging as large inter-class ambiguities among fine-grained classes cause
more noisy labels. We empirically show that existing methods that work well for
LNL fail to achieve satisfying performance for LNL-FG, raising the practical
need for effective solutions to LNL-FG. To this end, we propose a novel
framework called stochastic noise-tolerated supervised contrastive learning
(SNSCL) that confronts label noise by encouraging distinguishable
representation. Specifically, we design a noise-tolerated supervised
contrastive learning loss that incorporates a weight-aware mechanism for noisy
label correction and selective updating of the momentum queue. This mechanism
mitigates the effect of noisy anchors and avoids inserting noisy labels into
the momentum-updated queue. Moreover, to avoid manually defined
augmentation strategies in contrastive learning, we propose an efficient
stochastic module that samples feature embeddings from a generated
distribution, which can also enhance the representation ability of deep models.
SNSCL is general and compatible with prevailing robust LNL strategies to
improve their performance for LNL-FG. Extensive experiments demonstrate the
effectiveness of SNSCL.
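A minimal sketch of the core loss described above, i.e. a supervised contrastive objective whose anchors are down-weighted by a per-sample confidence and whose momentum queue only admits high-confidence samples, might look as follows. This is an illustrative reading of the abstract, not the authors' released code; the queue size, temperature, and the `conf_threshold` gate are assumptions.

```python
import torch
import torch.nn.functional as F

class NoiseTolerantSupCon:
    """Supervised contrastive loss with a selectively updated queue:
    a sketch of the SNSCL idea, not the authors' implementation."""

    def __init__(self, feat_dim=128, queue_size=4096, num_classes=200,
                 temperature=0.1, conf_threshold=0.9):
        self.queue = F.normalize(torch.randn(queue_size, feat_dim), dim=1)
        self.queue_labels = torch.randint(num_classes, (queue_size,))
        self.ptr = 0
        self.t = temperature
        self.conf_threshold = conf_threshold  # hypothetical gating value

    def loss(self, feats, labels, conf):
        # feats: (B, D) embeddings; labels: (B,); conf: (B,) per-sample
        # weight, e.g. the estimated probability that the given label is
        # correct (the "weight-aware mechanism" down-weights noisy anchors).
        feats = F.normalize(feats, dim=1)
        sim = feats @ self.queue.T / self.t                  # (B, Q)
        pos = (labels[:, None] == self.queue_labels[None, :]).float()
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
        return (conf * per_anchor).mean()

    @torch.no_grad()
    def update_queue(self, feats, labels, conf):
        # Selective update: only high-confidence samples enter the queue,
        # which keeps likely-noisy labels out of the contrast set.
        keep = conf > self.conf_threshold
        for f, y in zip(F.normalize(feats[keep], dim=1), labels[keep]):
            self.queue[self.ptr] = f
            self.queue_labels[self.ptr] = y
            self.ptr = (self.ptr + 1) % self.queue.shape[0]
```

The paper's stochastic module, which replaces handcrafted augmentations by sampling embeddings from a generated distribution, would slot in before the loss (for example, sampling `feats` from a per-sample Gaussian); it is omitted here for brevity.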
Related papers
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across broad noise levels and enjoys great scalability; a sketch of the recipe follows below.
arXiv Detail & Related papers (2023-12-13T17:59:07Z) - Learning with Noisy Labels Using Collaborative Sample Selection and
- Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning; the selection step is sketched below.
arXiv Detail & Related papers (2023-10-24T05:37:20Z) - BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise
- BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can degrade the performance of existing LNL algorithms by a large margin.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels distinguishable again; the attack side is sketched below.
arXiv Detail & Related papers (2023-05-28T06:26:23Z) - Latent Class-Conditional Noise Model [54.56899309997246]
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels (one sampling step is sketched below).
Our approach safeguards the stable update of the noise transition and avoids the arbitrary tuning from a mini-batch of samples found in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Towards Harnessing Feature Embedding for Robust Learning with Noisy
- Towards Harnessing Feature Embedding for Robust Learning with Noisy Labels [44.133307197696446]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with label noise, termed LabEl NoiseDilution (LEND).
arXiv Detail & Related papers (2022-06-27T02:45:09Z)
- ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise [10.441880303257468]
We propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA).
We derive that label smoothing (LS) incurs implicit Lipschitz regularization (LR).
Based on these derivations, we apply adaptive LS (ALS) on sub-classifier architectures for the practical application of adaptive LR on intermediate layers; the recipe is sketched below.
arXiv Detail & Related papers (2022-06-15T03:37:51Z) - Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble-label strategy is adopted for pseudo-label updating to stabilize the training of deep neural networks with noisy labels (sketched below).
arXiv Detail & Related papers (2022-06-13T14:04:57Z) - L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples, through meta-learning; the resulting objective is sketched below.
arXiv Detail & Related papers (2022-02-09T05:57:08Z) - Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels.
We propose a simple yet effective regularization that introduces Open-set samples with Dynamic Noisy Labels (ODNL) into training; see the sketch below.
arXiv Detail & Related papers (2021-06-21T07:15:50Z) - Influential Rank: A New Perspective of Post-training for Robust Model
- Influential Rank: A New Perspective of Post-training for Robust Model against Noisy Labels [23.80449026013167]
We propose a new approach for learning from noisy labels (LNL) via post-training.
We exploit the overfitting property of a trained model to identify mislabeled samples (a simple proxy is sketched below).
Our post-training approach creates great synergies when combined with existing LNL methods.
arXiv Detail & Related papers (2021-06-14T08:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.