PLReMix: Combating Noisy Labels with Pseudo-Label Relaxed Contrastive Representation Learning
- URL: http://arxiv.org/abs/2402.17589v2
- Date: Tue, 26 Nov 2024 16:45:46 GMT
- Title: PLReMix: Combating Noisy Labels with Pseudo-Label Relaxed Contrastive Representation Learning
- Authors: Xiaoyu Liu, Beitong Zhou, Zuogong Yue, Cheng Cheng
- Abstract summary: We propose an end-to-end PLReMix framework by introducing a Pseudo-Label Relaxed (PLR) contrastive loss.
The proposed PLR loss is pluggable; we have integrated it into other LNL methods and observed improved performance.
- Score: 7.556169113399857
- License:
- Abstract: Recently, using Contrastive Representation Learning (CRL) as a pre-training technique has improved the performance of learning with noisy labels (LNL) methods. However, when the CRL loss is trivially combined with LNL methods in an end-to-end framework rather than used for pre-training, experiments show severe performance degradation. We verify through experiments that this issue is caused by optimization conflicts between the losses and propose an end-to-end PLReMix framework by introducing a Pseudo-Label Relaxed (PLR) contrastive loss. The PLR loss constructs a reliable negative set for each sample by filtering out its inappropriate negative pairs, alleviating the conflicts that arise when these losses are trivially combined. The proposed PLR loss is pluggable, and we have integrated it into other LNL methods and observed improved performance. Furthermore, a two-dimensional Gaussian Mixture Model is adopted to distinguish clean and noisy samples by leveraging semantic information and model outputs simultaneously. Experiments on multiple benchmark datasets demonstrate the effectiveness of the proposed method. Code is available at https://github.com/lxysl/PLReMix.
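To make the relaxed-negative idea concrete, here is a minimal PyTorch sketch of a pseudo-label-relaxed contrastive loss. It assumes a SimCLR-style two-view setup; the top-k-overlap rule for filtering negatives, the function name, and all hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def plr_contrastive_loss(z1, z2, probs, temperature=0.5, top_k=2):
    """Instance-discrimination loss with a pseudo-label-relaxed negative set (sketch).

    z1, z2 : (N, D) projections of two augmented views of the same N images.
    probs  : (N, C) model softmax outputs used as soft pseudo-labels.
    A candidate negative j is dropped from anchor i's denominator whenever the
    top-k predicted classes of i and j overlap, so only "reliable" negatives
    contribute to the contrastive loss.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)                 # (2N, D)
    sim = z @ z.t() / temperature                                # (2N, 2N) similarities

    probs2 = torch.cat([probs, probs])                           # pseudo-labels per view
    topk = probs2.topk(top_k, dim=1).indices                     # (2N, k)
    # overlap[i, j] = True if samples i and j share any top-k predicted class
    overlap = (topk[:, None, :, None] == topk[None, :, None, :]).any(3).any(2)

    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    pos = eye.roll(n, dims=1)                                    # the other view of the same image
    neg = ~eye & ~overlap                                        # reliable negatives only

    exp_sim = torch.exp(sim)
    denom = (exp_sim * (pos | neg).float()).sum(1)               # positive + relaxed negatives
    loss = -(sim[pos] - torch.log(denom + 1e-12))
    return loss.mean()
```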
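Similarly, a hedged sketch of the two-dimensional Gaussian Mixture Model step: it fits a two-component GMM on two per-sample signals (here, per-sample cross-entropy and a prototype-similarity score, which are assumed stand-ins for the paper's exact coordinates) and thresholds the posterior of the low-loss component to flag likely-clean samples.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(ce_loss, proto_agreement, threshold=0.5):
    """Fit a 2-component GMM on two per-sample signals and flag likely-clean samples.

    ce_loss         : (N,) per-sample cross-entropy under the current model (model output signal).
    proto_agreement : (N,) a semantic signal, e.g. similarity between a sample's feature and the
                      class prototype of its (possibly noisy) label; both axes are illustrative.
    Returns a boolean mask where True marks samples treated as clean.
    """
    x = np.stack([ce_loss, proto_agreement], axis=1)
    x = (x - x.mean(0)) / (x.std(0) + 1e-8)                      # normalize each dimension

    gmm = GaussianMixture(n_components=2, covariance_type="full", reg_covar=1e-4)
    gmm.fit(x)

    # The component with the lower mean (normalized) loss is taken to be the "clean" mode.
    clean_comp = int(np.argmin(gmm.means_[:, 0]))
    p_clean = gmm.predict_proba(x)[:, clean_comp]
    return p_clean > threshold
```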
Related papers
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semi-/library-based unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z) - Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning model with cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Regroup Median Loss for Combating Label Noise [19.51996047333779]
Deep model training requires large-scale datasets of annotated data.
Due to the difficulty of annotating a large number of samples, label noise caused by incorrect annotations is inevitable.
We propose Regroup Median Loss (RML) to reduce the probability of selecting noisy samples and correct losses of noisy samples.
arXiv Detail & Related papers (2023-12-11T10:19:55Z) - Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
arXiv Detail & Related papers (2023-10-24T05:37:20Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - Rethinking Collaborative Metric Learning: Toward an Efficient Alternative without Negative Sampling [156.7248383178991]
The Collaborative Metric Learning (CML) paradigm has aroused wide interest in the area of recommendation systems (RS).
We find that negative sampling leads to a biased estimation of the generalization error.
Motivated by this, we propose an efficient alternative to CML without negative sampling, named Sampling-Free Collaborative Metric Learning (SFCML).
arXiv Detail & Related papers (2022-06-23T08:50:22Z) - Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression [12.415463205960156]
We introduce Batch Inverse-Variance (BIV), a loss function that is robust to near-ground-truth samples and allows control of the effective learning rate.
Our experimental results show that BIV significantly improves the performance of the networks on two noisy datasets.
arXiv Detail & Related papers (2021-07-09T15:39:31Z) - Two-stage Training for Learning from Label Proportions [18.78148397471913]
Learning from label proportions (LLP) aims at learning an instance-level classifier with label proportions in grouped training data.
We introduce the mixup strategy and symmetric cross-entropy to further reduce the label noise.
Our framework is model-agnostic and demonstrates compelling performance improvements in extensive experiments.
arXiv Detail & Related papers (2021-05-22T03:55:35Z)