ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise
- URL: http://arxiv.org/abs/2206.07277v1
- Date: Wed, 15 Jun 2022 03:37:51 GMT
- Title: ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise
- Authors: Jongwoo Ko, Bongsoo Yi, Se-Young Yun
- Abstract summary: We propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA).
We derive that label smoothing (LS) incurs implicit Lipschitz regularization (LR).
Based on these derivations, we apply adaptive LS (ALS) on sub-classifier architectures for the practical application of adaptive LR on intermediate layers.
- Score: 10.441880303257468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As label noise, one of the most common distribution shifts, severely
degrades deep neural networks' generalization performance, robust training with
noisy labels is becoming an important task in modern deep learning. In this
paper, we propose our framework, coined Adaptive LAbel smoothing on
Sub-ClAssifier (ALASCA), which provides a robust feature extractor with
theoretical guarantees and negligible additional computation. First, we derive
that label smoothing (LS) incurs implicit Lipschitz regularization (LR).
Furthermore, based on these derivations, we apply adaptive LS (ALS) on
sub-classifier architectures for the practical application of adaptive LR on
intermediate layers. We conduct extensive experiments with ALASCA, combine it
with previous noise-robust methods on several datasets, and show that our
framework consistently outperforms the corresponding baselines.
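The abstract describes the mechanism only at a high level. Below is a minimal PyTorch-style sketch of the general recipe it suggests: attach lightweight sub-classifiers to intermediate features and train them with per-sample adaptive label smoothing. The names (`SubClassifierHead`, `alasca_style_loss`) and the confidence-based smoothing schedule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smoothed_targets(labels, num_classes, alpha):
    """Soft targets y_ls = (1 - alpha) * one_hot(y) + alpha / K.

    `alpha` may be a scalar or a per-sample tensor of shape (batch,),
    which is what makes the smoothing *adaptive*.
    """
    one_hot = F.one_hot(labels, num_classes).float()
    alpha = alpha if torch.is_tensor(alpha) else torch.full(
        (labels.size(0),), alpha, device=labels.device)
    alpha = alpha.unsqueeze(1)                      # (batch, 1)
    return (1.0 - alpha) * one_hot + alpha / num_classes

def soft_cross_entropy(logits, soft_targets):
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

class SubClassifierHead(nn.Module):
    """Lightweight auxiliary classifier on an intermediate feature map."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feat):                        # feat: (B, C, H, W)
        return self.fc(self.pool(feat).flatten(1))

def alasca_style_loss(main_logits, sub_logits_list, labels, num_classes,
                      base_alpha=0.4, aux_weight=0.3):
    """Main cross-entropy plus adaptive label smoothing on each sub-classifier.

    Here the per-sample smoothing strength grows as the (detached) main
    prediction becomes less confident about the given label -- one plausible
    confidence proxy, not necessarily the paper's exact schedule.
    """
    loss = F.cross_entropy(main_logits, labels)
    with torch.no_grad():
        conf = F.softmax(main_logits, dim=1).gather(
            1, labels.unsqueeze(1)).squeeze(1)      # (batch,)
    alpha = base_alpha * (1.0 - conf)               # smooth uncertain samples more
    for sub_logits in sub_logits_list:
        targets = smoothed_targets(labels, num_classes, alpha)
        loss = loss + aux_weight * soft_cross_entropy(sub_logits, targets)
    return loss
```

A training step would backpropagate `alasca_style_loss` as usual; after training, the sub-classifier heads can be dropped and only the regularized feature extractor and main classifier kept, which is consistent with the abstract's claim of negligible additional computation.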
Related papers
- SSP-RACL: Classification of Noisy Fundus Images with Self-Supervised Pretraining and Robust Adaptive Credal Loss [3.8739860035485143]
Fundus image classification is crucial in computer-aided diagnosis, but label noise significantly impairs the performance of deep neural networks.
We propose a robust framework, Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL), for handling label noise in fundus image datasets.
arXiv Detail & Related papers (2024-09-25T02:41:58Z)
- SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training [68.7896349660824]
We present an in-depth analysis of the progressive overfitting problem through the lens of sequential fine-tuning (Seq FT).
Considering that overly fast representation learning and a biased classification layer constitute this problem, we introduce the advanced Slow Learner with Classifier Alignment (SLCA++) framework.
Our approach involves a Slow Learner that selectively reduces the learning rate of backbone parameters and a Classifier Alignment step that aligns the disjoint classification layers in a post-hoc fashion.
arXiv Detail & Related papers (2024-08-15T17:50:07Z)
- Robust Learning under Hybrid Noise [24.36707245704713]
We propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat hybrid noise from the perspective of data recovery.
arXiv Detail & Related papers (2024-07-04T16:13:25Z)
- Fine-Grained Classification with Noisy Labels [31.128588235268126]
Learning with noisy labels (LNL) aims to ensure model generalization given a label-corrupted training set.
We investigate a rarely studied scenario of LNL on fine-grained datasets (LNL-FG).
We propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representations.
arXiv Detail & Related papers (2023-03-04T12:32:45Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples seen in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL achieves significant performance improvements over state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced (a rough sketch of the idea follows this entry).
We test our method on CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
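Not the authors' code, but a rough sketch of the prototype idea summarized above: compute one prototype per class as the mean of normalized embeddings and classify queries by cosine similarity to those prototypes, so no extra parameters are fitted beyond the embedding network. Function names and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_prototypes(embeddings, labels, num_classes):
    """Class prototypes = mean of L2-normalized embeddings per class.

    No learnable parameters beyond the embedding network itself.
    """
    embeddings = F.normalize(embeddings, dim=1)     # (N, D)
    protos = torch.zeros(num_classes, embeddings.size(1),
                         device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return F.normalize(protos, dim=1)               # (K, D)

@torch.no_grad()
def prototype_predict(query_embeddings, prototypes, temperature=0.1):
    """Predict by cosine similarity to each class prototype.

    Because every class contributes exactly one prototype, head-class
    dominance in the training set does not bias the output scale.
    """
    query = F.normalize(query_embeddings, dim=1)
    logits = query @ prototypes.t() / temperature   # (N, K)
    return logits.argmax(dim=1)
```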
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Weakly Supervised Label Smoothing [15.05158252504978]
We study Label Smoothing (LS), a widely used regularization technique, in the context of neural learning to rank (L2R) models.
Inspired by our investigation of LS in the context of neural L2R models, we propose a novel technique called Weakly Supervised Label Smoothing (WSLS).
arXiv Detail & Related papers (2020-12-15T19:36:52Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model predictions for the target category (a rough sketch of the idea follows this entry).
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
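As a rough sketch of the online label smoothing idea summarized in the entry above: keep one soft-label distribution per class and refresh it each epoch from the model's own predictions on samples it currently classifies correctly. The class name, the correct-prediction filter, and the per-epoch update schedule are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

class OnlineLabelSmoother:
    """Maintains one soft-label distribution per class, updated each epoch
    from the model's own predictions on samples it classifies correctly."""

    def __init__(self, num_classes, device="cpu"):
        self.num_classes = num_classes
        # Start from uniform soft labels (plain label smoothing, fully flat).
        self.soft_labels = torch.full((num_classes, num_classes),
                                      1.0 / num_classes, device=device)
        self._accum = torch.zeros(num_classes, num_classes, device=device)
        self._count = torch.zeros(num_classes, device=device)

    def loss(self, logits, labels):
        targets = self.soft_labels[labels]          # (B, K)
        return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    @torch.no_grad()
    def accumulate(self, logits, labels):
        probs = F.softmax(logits, dim=1)
        correct = probs.argmax(dim=1) == labels
        for p, y in zip(probs[correct], labels[correct]):
            self._accum[y] += p
            self._count[y] += 1

    @torch.no_grad()
    def step_epoch(self):
        # Replace each class's soft label by its averaged predictions,
        # falling back to the previous distribution if a class had no hits.
        updated = self._count > 0
        self.soft_labels[updated] = (
            self._accum[updated] / self._count[updated].unsqueeze(1))
        self._accum.zero_()
        self._count.zero_()
```

During training one would call `loss(...)` for the backward pass, `accumulate(...)` on every batch, and `step_epoch()` once per epoch.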