Pseudo-label Correction for Instance-dependent Noise Using Teacher-student Framework
- URL: http://arxiv.org/abs/2311.14237v1
- Date: Fri, 24 Nov 2023 00:36:17 GMT
- Title: Pseudo-label Correction for Instance-dependent Noise Using Teacher-student Framework
- Authors: Eugene Kim
- Abstract summary: We propose a new teacher-student based framework termed P-LC (pseudo-label correction).
In our novel approach, we reconfigure the teacher network into a triple encoder, leveraging the triplet loss to establish a pseudo-label correction system.
Experiments on MNIST, Fashion-MNIST, and SVHN demonstrate P-LC's superior performance over existing state-of-the-art methods across all noise levels.
- Score: 1.2618527387900083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high capacity of deep learning models to learn complex patterns poses a
significant challenge when confronted with label noise. The inability to
differentiate clean and noisy labels ultimately results in poor generalization.
We approach this problem by reassigning the label for each image using a new
teacher-student based framework termed P-LC (pseudo-label correction).
Traditional teacher-student networks are composed of teacher and student
classifiers for knowledge distillation. In our novel approach, we reconfigure
the teacher network into a triple encoder, leveraging the triplet loss to
establish a pseudo-label correction system. As the student generates pseudo
labels for a set of given images, the teacher learns to choose between the
initially assigned labels and the pseudo labels. Experiments on MNIST,
Fashion-MNIST, and SVHN demonstrate P-LC's superior performance over existing
state-of-the-art methods across all noise levels, most notably in high noise.
In addition, we introduce a noise level estimation to help assess model
performance and inform the need for additional data cleaning procedures.
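The abstract above describes a teacher that embeds images (via a triple encoder trained with the triplet loss) and then chooses between each image's originally assigned label and the student's pseudo-label. A minimal, illustrative sketch of that selection idea is given below, assuming Euclidean distance to per-class centroids stands in for the teacher's learned embedding space; the function names, the centroid-based selection rule, and the toy data are hypothetical simplifications, not the paper's actual implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive
    example and push it away from the negative one by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def correct_label(embedding, class_centroids, given_label, pseudo_label):
    """Choose between the originally assigned label and the student's
    pseudo-label by comparing distances to class centroids in the
    teacher's embedding space (a simplified stand-in for the paper's
    triplet-based correction system)."""
    d_given = np.linalg.norm(embedding - class_centroids[given_label])
    d_pseudo = np.linalg.norm(embedding - class_centroids[pseudo_label])
    return pseudo_label if d_pseudo < d_given else given_label

# Toy example: 2-D embeddings with two class centroids.
centroids = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 5.0])}
x = np.array([4.5, 4.8])  # teacher embedding of a noisily labeled sample
print(correct_label(x, centroids, given_label=0, pseudo_label=1))  # → 1
```

In this toy case the sample's embedding lies near the centroid of class 1, so the pseudo-label overrides the (presumably noisy) given label; in the actual framework the embedding space and the selection behavior are learned, not hand-set.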
Related papers
- Adaptive Label Correction for Robust Medical Image Segmentation with Noisy Labels [21.12128358750749]
We propose a Mean Teacher-based Adaptive Label Correction framework for robust medical image segmentation with noisy labels.
It includes an adaptive label refinement mechanism that dynamically captures and weights differences across multiple disturbance versions to enhance the quality of noisy labels.
It also incorporates a sample-level uncertainty-based label selection algorithm to prioritize high-confidence samples for network updates.
arXiv Detail & Related papers (2025-03-15T18:03:01Z)
- Rethinking Pseudo-Label Guided Learning for Weakly Supervised Temporal Action Localization from the Perspective of Noise Correction [33.89781814072881]
We argue that the noise in pseudo-labels would interfere with the learning of fully-supervised detection head.
We introduce a two-stage noisy label learning strategy to harness every potential useful signal in noisy labels.
Our model outperforms the previous state-of-the-art method in detection accuracy and inference speed.
arXiv Detail & Related papers (2025-01-19T17:31:40Z)
- Imbalanced Medical Image Segmentation with Pixel-dependent Noisy Labels [23.049622621090453]
We propose Collaborative Learning with Curriculum Selection (CLCS) to address pixel-dependent noisy labels with class imbalance.
CLCS consists of two modules: Curriculum Noisy Label Sample Selection (CNS) and Noise Balance Loss (NBL)
arXiv Detail & Related papers (2025-01-12T00:59:57Z)
- CoDTS: Enhancing Sparsely Supervised Collaborative Perception with a Dual Teacher-Student Framework [15.538850922083652]
We propose an end-to-end Collaborative perception Dual Teacher-Student framework (CoDTS)
It employs adaptive complementary learning to produce both high-quality and high-quantity pseudo labels.
CoDTS effectively ensures an optimal balance of pseudo labels in both quality and quantity.
arXiv Detail & Related papers (2024-12-11T12:34:37Z)
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (partial label)
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels not to contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm.
arXiv Detail & Related papers (2024-02-07T13:32:47Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method can outperform multiple baselines with clear margins in broad noise levels and enjoy great scalability.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Blind Knowledge Distillation for Robust Image Classification [19.668440671541546]
Blind Knowledge Distillation is a teacher-student approach for learning with noisy labels.
We use Otsu's algorithm to estimate the tipping point from generalizing to overfitting.
We show in our experiments that Blind Knowledge Distillation detects overfitting effectively during training.
arXiv Detail & Related papers (2022-11-21T11:17:07Z)
- Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
Label mismatch problem is not yet fully explored in the previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble of labels is adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise will degenerate the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.