Towards Learning an Unbiased Classifier from Biased Data via Conditional
Adversarial Debiasing
- URL: http://arxiv.org/abs/2103.06179v1
- Date: Wed, 10 Mar 2021 16:50:42 GMT
- Authors: Christian Reimers and Paul Bodesheim and Jakob Runge and Joachim
Denzler
- Abstract summary: We present a novel adversarial debiasing method, which addresses a feature that is spuriously connected to the labels of training images.
We prove mathematically that our approach is superior to existing techniques for this type of bias.
Our experiments show that our approach performs better than state-of-the-art techniques on a well-known benchmark dataset with real-world images of cats and dogs.
- Score: 17.113618920885187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bias in classifiers is a severe issue of modern deep learning methods,
especially for their application in safety- and security-critical areas. Often,
the bias of a classifier is a direct consequence of a bias in the training
dataset, frequently caused by the co-occurrence of relevant features and
irrelevant ones. To mitigate this issue, we require learning algorithms that
prevent the propagation of bias from the dataset into the classifier. We
present a novel adversarial debiasing method, which addresses a feature that is
spuriously connected to the labels of training images but statistically
independent of the labels for test images. Thus, the automatic identification
of relevant features during training is perturbed by irrelevant features. This
is the case in a wide range of bias-related problems for many computer vision
tasks, such as automatic skin cancer detection or driver assistance. We prove
mathematically that our approach is superior to existing techniques for this
type of bias. Our experiments show that our approach performs
better than state-of-the-art techniques on a well-known benchmark dataset with
real-world images of cats and dogs.
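The core idea described in the abstract, an adversary that tries to recover the bias feature from the classifier's representation, evaluated conditionally within each class so that only bias information beyond the label is penalized, can be illustrated with a toy sketch. This is not the authors' implementation; the data, model, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy setup matching the abstract's setting: x1 carries the true class
# signal, x2 is a bias feature spuriously correlated with the label.
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n).astype(float)
bias = np.where(rng.random(n) < 0.9, y, 1.0 - y)   # agrees with y 90% of the time
X = np.stack([y + 0.3 * rng.normal(size=n),        # relevant feature x1
              bias + 0.3 * rng.normal(size=n)],    # irrelevant, biased feature x2
             axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w = np.array([0.1, 0.1])   # shared linear "encoder": h = X @ w
a = 0.1                    # adversary weight: predicts the bias feature from h
lr, lam = 0.1, 1.0         # illustrative learning rate and adversarial strength

for _ in range(300):
    h = X @ w
    p = sigmoid(h)         # classifier head: predicts the label y
    q = sigmoid(a * h)     # adversary head: predicts the bias feature

    # Conditional adversarial gradient: evaluate the adversary separately
    # within each class, so only bias information *beyond the label* is
    # penalized -- the conditioning that distinguishes this approach from
    # unconditional adversarial debiasing.
    g_adv = np.zeros(2)
    for c in (0.0, 1.0):
        m = y == c
        g_adv += X[m].T @ ((q[m] - bias[m]) * a) / m.sum()
    g_adv /= 2.0

    a -= lr * float(np.mean((q - bias) * h))      # adversary descends its loss
    w -= lr * (X.T @ (p - y) / n - lam * g_adv)   # encoder ascends it (reversal)

# The adversarial term counteracts the classifier's pull toward the
# biased feature x2, which the spurious training correlation would
# otherwise reward.
```

The gradient-reversal step (the encoder ascending the adversary's loss) is the standard adversarial-debiasing mechanism; the per-class conditioning of `g_adv` is what the paper's title refers to.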
Related papers
- DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for Fairer Learning [0.0]
Bias unascribed to sensitive features is challenging to identify and typically goes undiagnosed.
Strategies to mitigate unidentified bias and evaluate mitigation methods are crucially needed, yet remain underexplored.
We introduce Diverse Class-Aware Self-Training (DCAST), model-agnostic mitigation aware of class-specific bias.
arXiv Detail & Related papers (2024-09-30T09:26:19Z)
- Language-guided Detection and Mitigation of Unknown Dataset Bias [23.299264313976213]
We propose a framework to identify potential biases as keywords without prior knowledge based on the partial occurrence in the captions.
Our framework not only outperforms existing methods that require no prior knowledge, but is even comparable with a method that assumes prior knowledge.
arXiv Detail & Related papers (2024-06-05T03:11:33Z)
- Mitigating Bias Using Model-Agnostic Data Attribution [2.9868610316099335]
Mitigating bias in machine learning models is a critical endeavor for ensuring fairness and equity.
We propose a novel approach to address bias by leveraging pixel image attributions to identify and regularize regions of images containing bias attributes.
arXiv Detail & Related papers (2024-05-08T13:00:56Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Learning Debiased Classifier with Biased Committee [30.417623580157834]
Neural networks are prone to be biased towards spurious correlations between classes and latent attributes exhibited in a major portion of training data.
We propose a new method for training debiased classifiers with no spurious attribute label.
On five real-world datasets, our method outperforms prior art that, like ours, uses no spurious-attribute labels, and occasionally even surpasses methods relying on bias labels.
arXiv Detail & Related papers (2022-06-22T04:50:28Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
- Debiased Pseudo Labeling in Self-Training [77.83549261035277]
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets.
To mitigate the requirement for labeled data, self-training is widely used in both academia and industry by pseudo labeling on readily-available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
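The failure-based scheme in the last entry trains a biased and a debiased network in tandem and upweights the samples the biased network fails on (the bias-conflicting ones). A minimal sketch of that relative-difficulty weighting, assuming binary classification; the function names and numbers are illustrative, not the authors' code:

```python
import numpy as np

def cross_entropy(p, y):
    """Per-sample binary cross-entropy of predicted probabilities p against labels y."""
    eps = 1e-12
    return -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def relative_difficulty_weights(p_biased, p_debiased, y):
    """Weight each sample by how much harder it is for the biased model:
    w = CE_biased / (CE_biased + CE_debiased), so samples where the
    spurious cue fails (bias-conflicting samples) dominate training."""
    ce_b = cross_entropy(p_biased, y)
    ce_d = cross_entropy(p_debiased, y)
    return ce_b / (ce_b + ce_d + 1e-12)

# Two samples: the biased model is confidently correct on the first
# (bias-aligned) and confidently wrong on the second (bias-conflicting).
y = np.array([1.0, 1.0])
p_biased = np.array([0.95, 0.05])
p_debiased = np.array([0.7, 0.6])
w = relative_difficulty_weights(p_biased, p_debiased, y)
# The bias-conflicting sample receives the larger weight.
```

The debiased network's per-sample loss is then multiplied by these weights, so the "easy" spurious correlation learned first by the biased network is exactly what gets downweighted.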
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.