CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep
Learning
- URL: http://arxiv.org/abs/2202.05613v3
- Date: Sun, 30 Apr 2023 02:50:33 GMT
- Title: CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep
Learning
- Authors: Jun Shu, Xiang Yuan, Deyu Meng, Zongben Xu
- Abstract summary: Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
- Score: 55.733193075728096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deep neural networks can easily overfit to biased training data
containing corrupted labels or class imbalance. Sample re-weighting methods are
popularly used to alleviate this data bias issue. Most current methods, however,
require manually pre-specifying the weighting schemes, together with their
additional hyper-parameters, based on the characteristics of the investigated
problem and training data. This makes them fairly hard to apply in general
practical scenarios, owing to the significant complexity and inter-class
variation of data bias situations. To address this issue, we propose a
meta-model capable of adaptively learning an explicit weighting scheme directly
from data. Specifically, by treating each training class as a separate learning
task, our method extracts an explicit weighting function, with sample loss and
task/class feature as input and sample weight as output, so as to impose
adaptively varying weighting schemes on different sample classes according to
their own intrinsic bias characteristics. Synthetic and real data experiments
substantiate the capability of our method to achieve proper weighting schemes
in various data bias cases, such as class imbalance, feature-independent and
feature-dependent label noise, and more complicated bias scenarios beyond
conventional cases. Besides, the task-transferability of the learned weighting
scheme is substantiated by directly deploying the weighting function learned on
the relatively small-scale CIFAR-10 dataset to the much larger-scale full
WebVision dataset. A performance gain over previous state-of-the-art (SOTA)
methods is readily achieved without additional hyper-parameter tuning or
meta-gradient descent steps. The general applicability of our method to
multiple robust deep learning problems, including partial-label learning,
semi-supervised learning, and selective classification, has also been validated.
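To make the weighting mechanism concrete, below is a minimal PyTorch sketch of a class-aware weighting meta-network: a small MLP mapping a sample's loss and a task/class feature to a weight in [0, 1]. The two-layer architecture and the choice of log class size as the class feature are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a class-aware sample-weighting meta-network in PyTorch.
# The two-layer MLP and the use of log class size as the task/class feature
# are illustrative assumptions, not the authors' exact design.
import torch
import torch.nn as nn

class ClassAwareWeightNet(nn.Module):
    """Maps (sample loss, task/class feature) -> sample weight in [0, 1]."""
    def __init__(self, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),   # input: [loss, class feature]
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),           # squash to a weight in [0, 1]
        )

    def forward(self, loss, class_feat):
        return self.net(torch.stack([loss, class_feat], dim=-1)).squeeze(-1)

# Usage inside one training step: reweight per-sample losses before averaging.
weight_net = ClassAwareWeightNet()
per_sample_loss = torch.rand(32)                      # stand-in for CE losses
class_sizes = torch.randint(10, 5000, (32,)).float()  # size of each sample's class
weights = weight_net(per_sample_loss.detach(), class_sizes.log())
weighted_loss = (weights * per_sample_loss).mean()
```

In the paper, the weighting function itself is trained by meta-gradients on a small unbiased meta set; that outer loop is omitted here for brevity.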
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new ones.
However, storing raw data is often impractical in view of memory constraints or data privacy concerns.
As a replacement, data-free replay methods synthesize samples by inverting the classification model.
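The inversion step admits a compact illustration: starting from random noise, inputs are optimized so that the frozen classifier assigns them to a chosen old class. The sketch below shows only this core idea; practical data-free replay methods add stronger image priors (e.g., batch-norm statistic matching), which are omitted here.

```python
# Hedged sketch of "inverting" replay samples from a trained classifier:
# optimize random inputs so the frozen model assigns them to a target class.
import torch
import torch.nn.functional as F

def invert_samples(model, target_class: int, n: int = 8, steps: int = 200,
                   lr: float = 0.1, shape=(3, 32, 32)):
    model.eval()
    x = torch.randn(n, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    labels = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), labels)  # push logits toward the class
        loss.backward()
        opt.step()
    return x.detach()  # synthetic "replay" images for the old class
```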
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Towards Accelerated Model Training via Bayesian Data Selection [45.62338106716745]
Recent work has proposed a more reasonable data selection principle that examines the data's impact on the model's generalization loss, but its practical adoption has remained problematic.
This work solves these problems by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models.
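As a loose illustration of how an off-the-shelf zero-shot predictor could enter such a selection rule, the sketch below keeps the batch samples whose labels the zero-shot model deems plausible but the current model still fits poorly. The scoring rule and the keep_frac parameter are assumptions for illustration, not the paper's actual Bayesian criterion.

```python
# Loose illustration of data selection with an off-the-shelf zero-shot
# predictor: keep samples whose labels the zero-shot model finds plausible
# but the current model still gets wrong (informative and likely clean).
import torch
import torch.nn.functional as F

def select_batch(model, zeroshot, x, y, keep_frac: float = 0.5):
    with torch.no_grad():
        p_zero = F.softmax(zeroshot(x), dim=-1).gather(1, y[:, None]).squeeze(1)
    loss = F.cross_entropy(model(x), y, reduction="none")
    score = p_zero * loss.detach()   # plausible label x high current loss
    k = max(1, int(keep_frac * len(y)))
    idx = score.topk(k).indices
    return x[idx], y[idx]
```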
arXiv Detail & Related papers (2023-08-21T07:58:15Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
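A generic skeleton of where sample weights enter a differentiable, score-based objective is sketched below: per-sample reconstruction residuals of a linear SEM are reweighted before being combined with a NOTEARS-style acyclicity penalty. How ReScore actually adapts the weights (a bilevel scheme) is not reproduced here; the snippet only shows the reweighted score itself.

```python
# Generic skeleton of a reweighted score for differentiable causal discovery.
# X: (n, d) data; W: (d, d) weighted adjacency; w: (n,) learnable sample weights.
import torch

def reweighted_score(X, W, w, lam: float = 0.1):
    resid = ((X - X @ W) ** 2).sum(dim=1)          # per-sample residual
    fit = (torch.softmax(w, dim=0) * resid).sum()  # weighted reconstruction loss
    acyc = torch.trace(torch.matrix_exp(W * W)) - X.shape[1]  # h(W)=0 iff W is a DAG
    return fit + lam * acyc
```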
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Dynamic Loss For Robust Learning [17.33444812274523]
This work presents a novel meta-learning based dynamic loss that automatically adjusts the objective functions with the training process to robustly learn a classifier from long-tailed noisy data.
Our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and WebVision.
arXiv Detail & Related papers (2022-11-22T01:48:25Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely biases data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) addresses distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
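A hedged sketch of the class-aware alignment idea follows: each test feature is pulled toward the source-domain statistics of its pseudo-class. For brevity the snippet uses a plain squared distance to precomputed class means rather than the paper's exact Mahalanobis-style formulation.

```python
# Hedged sketch of class-aware feature alignment for test-time adaptation:
# pull each test feature toward the source-domain mean of its pseudo-class.
# Source class means are assumed to be precomputed offline.
import torch

def class_aware_alignment_loss(feats, logits, source_means):
    # feats: (B, D); logits: (B, C); source_means: (C, D)
    pseudo = logits.argmax(dim=1)      # pseudo-labels at test time
    target = source_means[pseudo]      # class-conditional anchors
    return ((feats - target) ** 2).sum(dim=1).mean()
```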
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Imbalanced Classification via Explicit Gradient Learning From Augmented Data [0.0]
We propose a novel deep meta-learning technique to augment a given imbalanced dataset with new minority instances.
The advantage of the proposed method is demonstrated on synthetic and real-world datasets with various imbalance ratios.
arXiv Detail & Related papers (2022-02-21T22:16:50Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, with sample weights computed using a validation set that carries sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
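A much-simplified sketch of the two-stage idea follows: stage one scores each training sample by how well its gradient aligns with a validation objective (a cheap stand-in for full influence functions), and stage two would retrain on the resulting reweighted loss. The function names and the alignment-based scoring are illustrative assumptions, not FAIRIF's exact procedure.

```python
# Stage one of a simplified influence-style reweighting: score each training
# sample by the alignment of its gradient with a validation objective's
# gradient (val_grad, precomputed over a validation set with sensitive
# attributes). Stage two would retrain with these weights on the loss.
import torch
import torch.nn.functional as F

def per_sample_weights(model, samples, val_grad, params):
    weights = []
    for xi, yi in samples:  # samples: iterable of (input, long label) pairs
        loss = F.cross_entropy(model(xi[None]), yi[None])
        grads = torch.autograd.grad(loss, params)
        # Positive alignment: a descent step on this sample also reduces
        # the validation objective, so the sample is upweighted.
        align = sum((g * v).sum() for g, v in zip(grads, val_grad))
        weights.append(torch.relu(align))
    w = torch.stack(weights)
    return w / w.sum().clamp_min(1e-8)
```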
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Delving into Sample Loss Curve to Embrace Noisy and Imbalanced Data [17.7825114228313]
Corrupted labels and class imbalance are commonly encountered in practically collected training data.
Existing approaches alleviate these issues by adopting a sample re-weighting strategy.
However, samples with corrupted labels and samples from tailed classes commonly co-exist in training data.
arXiv Detail & Related papers (2021-12-30T09:20:07Z)
- Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance [6.875312133832079]
We propose a novel loss function named Class-wise Difficulty-Balanced loss.
It dynamically distributes weights to each sample according to the difficulty of the class that the sample belongs to.
The results show that CDB loss consistently outperforms the recently proposed loss functions on class-imbalanced datasets.
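The class-difficulty weighting admits a short sketch: difficulty is measured as one minus a class's current accuracy, and each sample is weighted by its class difficulty raised to a temperature. The exact difficulty measure and the tau value here are assumptions in the spirit of the summary above, not necessarily the paper's final form.

```python
# Sketch of a class-wise difficulty-balanced loss: per-class difficulty is
# taken as (1 - class accuracy), and each sample's cross-entropy is scaled
# by its class difficulty raised to a temperature tau (an assumed value).
import torch
import torch.nn.functional as F

def cdb_loss(logits, targets, class_accuracy, tau: float = 1.5):
    # class_accuracy: (C,) running per-class accuracy, e.g. from validation
    difficulty = (1.0 - class_accuracy).clamp_min(1e-3)
    weights = difficulty.pow(tau)
    weights = weights / weights.mean()   # keep the overall loss scale stable
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights[targets] * per_sample).mean()
```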
arXiv Detail & Related papers (2020-10-05T07:19:19Z)