Dynamic Loss For Robust Learning
- URL: http://arxiv.org/abs/2211.12506v2
- Date: Tue, 5 Sep 2023 14:30:14 GMT
- Title: Dynamic Loss For Robust Learning
- Authors: Shenwang Jiang, Jianan Li, Jizhou Zhang, Ying Wang, Tingfa Xu
- Abstract summary: This work presents a novel meta-learning based dynamic loss that automatically adjusts the objective functions with the training process to robustly learn a classifier from long-tailed noisy data.
Our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and Webvision.
- Score: 17.33444812274523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Label noise and class imbalance commonly coexist in real-world data. Previous
works on robust learning, however, usually address only one type of data bias
and underperform when facing both. To mitigate this gap, this work
presents a novel meta-learning based dynamic loss that automatically adjusts
the objective functions with the training process to robustly learn a
classifier from long-tailed noisy data. Concretely, our dynamic loss comprises
a label corrector and a margin generator, which respectively correct noisy
labels and generate additive per-class classification margins by perceiving the
underlying data distribution as well as the learning state of the classifier.
Equipped with a new hierarchical sampling strategy that enriches a small amount
of unbiased metadata with diverse and hard samples, the two components in the
dynamic loss are optimized jointly through meta-learning and cultivate the
classifier to well adapt to clean and balanced test data. Extensive experiments
show our method achieves state-of-the-art accuracy on multiple real-world and
synthetic datasets with various types of data biases, including CIFAR-10/100,
Animal-10N, ImageNet-LT, and Webvision. Code will soon be publicly available.
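To make the two components described in the abstract more concrete, below is a minimal, illustrative PyTorch sketch of how such a dynamic loss could be computed on a training batch: a label corrector that blends the given (possibly noisy) label with the model's own prediction, and additive per-class margins applied to the logits. The module structure, layer sizes, and blending scheme are assumptions for illustration only, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicLoss(nn.Module):
    """Toy dynamic loss: a label corrector plus additive per-class margins."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Label corrector: predicts a per-sample confidence in [0, 1] used to
        # blend the given (possibly noisy) label with the model's prediction.
        self.label_corrector = nn.Sequential(
            nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
        )
        # Margin generator, reduced here to one learnable additive margin per
        # class, intended to compensate for class imbalance.
        self.class_margins = nn.Parameter(torch.zeros(num_classes))

    def forward(self, logits: torch.Tensor, noisy_labels: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(logits.detach(), dim=1)
        onehot = F.one_hot(noisy_labels, num_classes=logits.size(1)).float()
        w = self.label_corrector(probs)              # (B, 1) blending weight
        corrected = w * onehot + (1.0 - w) * probs   # soft corrected targets
        adjusted = logits + self.class_margins       # additive per-class margins
        return -(corrected * F.log_softmax(adjusted, dim=1)).sum(dim=1).mean()


# Usage on a dummy batch (10 classes, batch size 4).
loss_fn = DynamicLoss(num_classes=10)
loss = loss_fn(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```

In the paper, the label corrector and margin generator are themselves optimized by meta-learning on a small set of unbiased metadata selected by the hierarchical sampling strategy; the sketch above covers only the loss computation, not that outer optimization loop.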
Related papers
- Meta-GCN: A Dynamically Weighted Loss Minimization Method for Dealing with the Data Imbalance in Graph Neural Networks [5.285761906707429]
We propose a meta-learning algorithm, named Meta-GCN, for adaptively learning the example weights.
We have shown that Meta-GCN outperforms state-of-the-art frameworks and other baselines in terms of accuracy.
arXiv Detail & Related papers (2024-06-24T18:59:24Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Representation Learning for the Automatic Indexing of Sound Effects Libraries [79.68916470119743]
We show that a task-specific but dataset-independent representation can successfully address data issues such as class imbalance, inconsistent class labels, and insufficient dataset size.
Detailed experimental results show the impact of metric learning approaches and different cross-dataset training methods on representational effectiveness.
arXiv Detail & Related papers (2022-08-18T23:46:13Z)
- Synergistic Network Learning and Label Correction for Noise-robust Image Classification [28.27739181560233]
Deep Neural Networks (DNNs) tend to overfit training label noise, resulting in poorer model performance in practice.
We propose a robust label correction framework combining the ideas of small loss selection and noise correction.
We demonstrate our method on both synthetic and real-world datasets with different noise types and rates.
arXiv Detail & Related papers (2022-02-27T23:06:31Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data; a generic meta-update loop of this kind is sketched after this list.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- Delving into Sample Loss Curve to Embrace Noisy and Imbalanced Data [17.7825114228313]
Corrupted labels and class imbalance are commonly encountered in practically collected training data.
Existing approaches alleviate these issues by adopting a sample re-weighting strategy.
However, samples with corrupted labels and samples from tail classes commonly co-exist in training data.
arXiv Detail & Related papers (2021-12-30T09:20:07Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
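Several of the papers above (e.g. Meta-GCN, CMW-Net, and Delving into Sample Loss Curve), as well as the dynamic loss itself, share a common bilevel optimization pattern: loss or weighting parameters are updated by differentiating a meta loss, computed on a small clean metadata batch, through a virtual update of the classifier. The following sketch illustrates one such meta-update step with a toy linear classifier and a per-sample weight network; all names, shapes, and hyperparameters are illustrative assumptions rather than any specific paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim, lr_inner = 5, 16, 0.1

# Toy linear classifier parameters and a per-sample weight network.
W = torch.randn(num_classes, dim, requires_grad=True)
weight_net = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
meta_opt = torch.optim.Adam(weight_net.parameters(), lr=1e-3)

def classifier(x, weights):
    return x @ weights.t()

# Noisy training batch and small clean metadata batch (random placeholders).
x_train, y_train = torch.randn(32, dim), torch.randint(0, num_classes, (32,))
x_meta, y_meta = torch.randn(8, dim), torch.randint(0, num_classes, (8,))

# 1) Weighted training loss, with weights produced from per-sample losses.
per_sample = F.cross_entropy(classifier(x_train, W), y_train, reduction="none")
w = weight_net(per_sample.detach().unsqueeze(1)).squeeze(1)
train_loss = (w * per_sample).mean()

# 2) Virtual SGD step on the classifier, keeping the graph so gradients can
#    later flow back into weight_net through the updated weights.
grad_W = torch.autograd.grad(train_loss, W, create_graph=True)[0]
W_virtual = W - lr_inner * grad_W

# 3) Meta loss on the clean metadata batch, backpropagated into weight_net.
meta_loss = F.cross_entropy(classifier(x_meta, W_virtual), y_meta)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```

The `create_graph=True` argument is what allows the meta loss to differentiate through the virtual update and reach the weight network's parameters.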
This list is automatically generated from the titles and abstracts of the papers in this site.