Dataset Bias Mitigation Through Analysis of CNN Training Scores
- URL: http://arxiv.org/abs/2106.14829v1
- Date: Mon, 28 Jun 2021 16:07:49 GMT
- Title: Dataset Bias Mitigation Through Analysis of CNN Training Scores
- Authors: Ekberjan Derman
- Abstract summary: We propose a novel, domain-independent approach, called score-based resampling (SBR), to locate the under-represented samples of the original training dataset.
In our method, once the CNN is trained, we use the same model to run inference on its own training samples, obtain prediction scores, and, based on the distance between the predicted scores and the ground truth, identify the samples that lie far from their ground-truth labels.
The obtained results confirmed the validity of our proposed method for identifying under-represented samples in the original dataset and decreasing the categorical bias in classifying certain groups.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training datasets are crucial for convolutional neural network-based
algorithms and directly impact their overall performance. As such, using a
well-structured dataset with a minimal level of bias is always desirable. In
this paper, we propose a novel, domain-independent approach, called
score-based resampling (SBR), to locate the under-represented samples of the
original training dataset based on the model prediction scores obtained with
that training set. In our method, once the model is trained, we use the same
CNN to run inference on its own training samples, obtain prediction scores,
and, based on the distance between the predicted scores and the ground truth,
identify samples that lie far from their ground-truth labels and augment them
in the original training set. The
temperature term of the Sigmoid function is decreased to better differentiate
scores. For experimental evaluation, we selected one Kaggle dataset for gender
classification. We first used a CNN-based classifier with a relatively standard
structure, trained it on the training images, and evaluated it on the provided
validation samples of the original dataset. We then assessed it on an entirely
new test dataset consisting of light male, light female, dark male, and dark
female groups. The obtained accuracies varied, revealing the existence of
categorical bias against certain groups in the original dataset. Subsequently,
we trained the model after resampling based on our proposed approach. We
compared our method with a previously proposed variational autoencoder (VAE)
based algorithm. The obtained results confirmed the validity of our proposed
method for identifying under-represented samples in the original dataset and
decreasing the categorical bias in classifying certain groups. Although tested
on gender classification, the proposed algorithm can be used to investigate the
dataset structure of any CNN-based task.
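To make the procedure concrete, the following is a minimal sketch of the scoring step, assuming a PyTorch binary classifier that outputs a single logit per image; the temperature value, the distance threshold, the loader convention, and the `find_underrepresented` helper are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the SBR scoring step, assuming a PyTorch binary classifier.
# Model, temperature, threshold, and loader conventions are illustrative only.
import torch

def temperature_sigmoid(logits, temperature=0.5):
    # A temperature below 1 sharpens the sigmoid, spreading scores apart so
    # the distance to the ground-truth label is easier to threshold.
    return torch.sigmoid(logits / temperature)

@torch.no_grad()
def find_underrepresented(model, train_loader, temperature=0.5, threshold=0.4):
    """Run the trained model on its own training samples and flag those whose
    predicted score lies far from the ground-truth label (0 or 1)."""
    model.eval()
    flagged = []
    # The loader is assumed to yield (index_batch, image_batch, label_batch).
    for indices, images, labels in train_loader:
        scores = temperature_sigmoid(model(images).squeeze(1), temperature)
        distances = (scores - labels.float()).abs()
        flagged.extend(indices[distances > threshold].tolist())
    return flagged

# The flagged samples would then be augmented (e.g. duplicated or transformed)
# and added back into the training set before retraining the model.
```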
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Adversarial Sampling for Fairness Testing in Deep Neural Network [0.0]
We use adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset.
We trained our neural network model on the original images only, without training it on the perturbed or attacked images.
When we fed the adversarial samples to our model, it was able to predict the original category/class of the image each adversarial sample belongs to.
arXiv Detail & Related papers (2023-03-06T03:55:37Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift between training and test data by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Mitigating Dataset Bias by Using Per-sample Gradient [9.290757451344673]
We propose PGD (Per-sample Gradient-based Debiasing), which comprises three steps: training a model with uniform batch sampling, setting the importance of each sample in proportion to the norm of that sample's gradient, and training the model using importance-batch sampling (a minimal sketch of the gradient-weighting step appears after this list).
Compared with existing baselines on various synthetic and real-world datasets, the proposed method showed state-of-the-art accuracy for the classification task.
arXiv Detail & Related papers (2022-05-31T11:41:02Z) - Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z) - Novelty-based Generalization Evaluation for Traffic Light Detection [13.487711023133764]
We evaluate the generalization ability of Convolutional Neural Networks (CNNs) by calculating various metrics on an independent test dataset.
We propose a CNN generalization scoring framework that considers novelty of objects in the test dataset.
arXiv Detail & Related papers (2022-01-03T09:23:56Z) - Robust Fairness-aware Learning Under Sample Selection Bias [17.09665420515772]
We propose a framework for robust and fair learning under sample selection bias.
We develop two algorithms to handle sample selection bias, covering the cases where test data is available and where it is unavailable.
arXiv Detail & Related papers (2021-05-24T23:23:36Z) - Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
arXiv Detail & Related papers (2020-11-18T00:33:24Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Because the created dataset can be large, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Incremental Unsupervised Domain-Adversarial Training of Neural Networks [17.91571291302582]
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
arXiv Detail & Related papers (2020-01-13T09:54:35Z)
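As referenced in the PGD entry above, the following is a minimal sketch of the per-sample gradient weighting step, assuming a PyTorch classifier trained with cross-entropy; the per-sample loop, the normalization, and the sampler suggestion are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch of the gradient-norm weighting described for PGD above,
# assuming a PyTorch classifier and cross-entropy loss; the normalization and
# the per-sample loop are assumptions, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def per_sample_gradient_weights(model, dataset):
    """Assign each training sample an importance proportional to the norm of
    the gradient of its individual loss with respect to the model parameters."""
    weights = []
    for image, label in dataset:
        model.zero_grad()
        logits = model(image.unsqueeze(0))           # add batch dimension
        target = torch.as_tensor(label).reshape(1)   # class index as a 1-element tensor
        F.cross_entropy(logits, target).backward()
        grad_sq = sum(p.grad.pow(2).sum() for p in model.parameters()
                      if p.grad is not None)
        weights.append(torch.sqrt(grad_sq).item())
    total = sum(weights)
    return [w / total for w in weights]

# The resulting weights could feed torch.utils.data.WeightedRandomSampler so
# that hard, likely under-represented samples are drawn more often when the
# model is retrained.
```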
This list is automatically generated from the titles and abstracts of the papers in this site.