On Robust Learning from Noisy Labels: A Permutation Layer Approach
- URL: http://arxiv.org/abs/2211.15890v1
- Date: Tue, 29 Nov 2022 03:01:48 GMT
- Title: On Robust Learning from Noisy Labels: A Permutation Layer Approach
- Authors: Salman Alsubaihi, Mohammed Alkhrashi, Raied Aljadaany, Fahad Albalawi,
Bernard Ghanem
- Abstract summary: This paper introduces a permutation layer learning approach termed PermLL to dynamically calibrate the training process of a deep neural network (DNN).
We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label.
We validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
- Score: 53.798757734297986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existence of label noise imposes significant challenges (e.g., poor
generalization) on the training process of deep neural networks (DNNs). As a
remedy, this paper introduces a permutation layer learning approach termed
PermLL to dynamically calibrate the training process of the DNN subject to
instance-dependent and instance-independent label noise. The proposed method
augments the architecture of a conventional DNN with an instance-dependent
permutation layer. This layer is essentially a convex combination of
permutation matrices that is dynamically calibrated for each sample. The
primary objective of the permutation layer is to correct the loss of noisy
samples, thereby mitigating the effect of label noise. We provide two variants
of PermLL in this paper: one applies the permutation layer to the model's
prediction, while the other applies it directly to the given noisy label. In
addition, we provide a theoretical comparison between the two variants and
show that previous methods can be seen as instances of one of the variants.
Finally, we validate PermLL experimentally and show that it achieves
state-of-the-art performance on both real and synthetic datasets.
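A minimal sketch of the core mechanism, assuming a PyTorch setting: a per-sample gating head mixes a fixed bank of permutation matrices into one doubly stochastic matrix per instance. The class names, the random permutation bank, and the linear gate are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermutationLayer(nn.Module):
    """Instance-dependent convex combination of permutation matrices (sketch)."""

    def __init__(self, num_classes, feat_dim, num_perms=8):
        super().__init__()
        perms = [torch.eye(num_classes)]  # identity first: "no correction"
        for _ in range(num_perms - 1):    # random bank, illustrative only
            perms.append(torch.eye(num_classes)[torch.randperm(num_classes)])
        self.register_buffer("perm_bank", torch.stack(perms))  # (P, C, C)
        self.gate = nn.Linear(feat_dim, num_perms)  # per-sample mixing weights

    def forward(self, probs, feats):
        w = F.softmax(self.gate(feats), dim=-1)               # (B, P), on the simplex
        mix = torch.einsum("bp,pij->bij", w, self.perm_bank)  # (B, C, C), doubly stochastic
        return torch.einsum("bij,bj->bi", mix, probs)         # (B, C)
```

The two variants then differ only in where this layer acts: one would permute the model's softmax output before computing the loss against the noisy label, while the other would permute the one-hot noisy label and leave the prediction untouched.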
Related papers
- Foster Adaptivity and Balance in Learning with Noisy Labels [26.309508654960354]
We propose a novel approach named SED to deal with label noise in a Self-adaptive and class-balanced manner.
A mean-teacher model is then employed to correct labels of noisy samples.
We additionally propose a self-adaptive and class-balanced sample re-weighting mechanism to assign different weights to detected noisy samples.
arXiv Detail & Related papers (2024-07-03T03:10:24Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample leaves the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
arXiv Detail & Related papers (2023-10-24T05:37:20Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples used in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Instance-dependent Label Distribution Estimation for Learning with Label Noise [20.479674500893303]
Noise transition matrix (NTM) estimation is a promising approach for learning with label noise.
We propose an Instance-dependent Label Distribution Estimation (ILDE) method to learn from noisy labels for image classification.
Our results indicate that the proposed ILDE method outperforms all competing methods, whether the noise is synthetic or real.
arXiv Detail & Related papers (2022-12-16T10:13:25Z)
- Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling [22.62790706276081]
Training deep neural networks (DNNs) with noisy labels is practically challenging.
Previous efforts tend to handle all or part of the data in a unified denoising flow.
We propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner.
arXiv Detail & Related papers (2022-08-23T02:06:38Z)
- An Ensemble Noise-Robust K-fold Cross-Validation Selection Method for Noisy Labels [0.9699640804685629]
Large-scale datasets tend to contain mislabeled samples that can be memorized by deep neural networks (DNNs).
We present Ensemble Noise-robust K-fold Cross-Validation Selection (E-NKCVS) to effectively select clean samples from noisy data.
We evaluate our approach on various image and text classification tasks where the labels have been manually corrupted with different noise ratios.
arXiv Detail & Related papers (2021-07-06T02:14:52Z)
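Loosely illustrating the E-NKCVS idea above, here is a minimal vote-based K-fold selection sketch; `fit_predict` is a hypothetical user-supplied training routine, and the paper's ensemble weighting and label re-assignment are omitted.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_select_clean(X, y_noisy, fit_predict, n_splits=5, n_repeats=3, thresh=0.6):
    """Mark a sample clean when held-out predictions agree with its given label."""
    votes = np.zeros(len(y_noisy))
    for r in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=r)
        for train_idx, hold_idx in kf.split(X):
            # fit_predict(X_tr, y_tr, X_ho) trains a model, predicts held-out labels
            y_pred = fit_predict(X[train_idx], y_noisy[train_idx], X[hold_idx])
            votes[hold_idx] += (y_pred == y_noisy[hold_idx])
    return votes / n_repeats >= thresh  # boolean mask of likely-clean samples
```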
- Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization [88.91872713134342]
We propose a theoretically grounded method that can estimate the noise transition matrix and learn a classifier simultaneously.
We show the effectiveness of the proposed method through experiments on benchmark and real-world datasets.
arXiv Detail & Related papers (2021-02-04T05:09:18Z)
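Several entries above (ILDE, LCCN, and the total-variation method) revolve around a noise transition matrix; below is a minimal forward-correction sketch with a learnable row-stochastic T, a generic stand-in rather than any of those papers' specific estimators or regularizers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardCorrection(nn.Module):
    """Learnable transition matrix T with forward loss correction (sketch).

    T[i, j] approximates P(noisy label = j | true label = i).
    """
    def __init__(self, num_classes):
        super().__init__()
        # Row logits start near the identity so T is diagonal-dominant.
        self.logits = nn.Parameter(4.0 * torch.eye(num_classes))

    def transition(self):
        return F.softmax(self.logits, dim=1)  # each row on the simplex

    def forward(self, clean_probs, noisy_y):
        noisy_probs = clean_probs @ self.transition()  # (B, C)
        return F.nll_loss(noisy_probs.clamp_min(1e-8).log(), noisy_y)
```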
- Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.