Mitigating Algorithmic Bias in Multiclass CNN Classifications Using Causal Modeling
- URL: http://arxiv.org/abs/2501.07885v1
- Date: Tue, 14 Jan 2025 06:51:27 GMT
- Title: Mitigating Algorithmic Bias in Multiclass CNN Classifications Using Causal Modeling
- Authors: Min Sik Byun, Wendy Wan Yee Hui, Wai Kwong Lau
- Abstract summary: This study describes a procedure for applying causal modeling to detect and mitigate algorithmic bias in a classification problem.
The dataset was derived from the FairFace dataset, supplemented with emotional labels generated by the DeepFace pre-trained model.
The resulting debiased classifications demonstrated enhanced gender fairness across all classes, with negligible impact, or even a slight improvement, on overall accuracy.
- Abstract: This study describes a procedure for applying causal modeling to detect and mitigate algorithmic bias in a multiclass classification problem. The dataset was derived from the FairFace dataset, supplemented with emotional labels generated by the DeepFace pre-trained model. A custom Convolutional Neural Network (CNN) was developed, consisting of four convolutional blocks, followed by fully connected layers and dropout layers to mitigate overfitting. Gender bias was identified in the CNN model's classifications: Females were more likely to be classified as "happy" or "sad," while males were more likely to be classified as "neutral." To address this, the one-vs-all (OvA) technique was applied. A causal model was constructed for each emotion class to adjust the CNN model's predicted class probabilities. The adjusted probabilities for the various classes were then aggregated by selecting the class with the highest probability. The resulting debiased classifications demonstrated enhanced gender fairness across all classes, with negligible impact, or even a slight improvement, on overall accuracy. This study highlights that algorithmic fairness and accuracy are not necessarily trade-offs. All data and code for this study are publicly available for download.
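To make the described pipeline concrete, a minimal sketch of the classifier follows. The abstract fixes only the block structure (four convolutional blocks, then fully connected and dropout layers), so the filter counts, kernel sizes, dense width, dropout rate, and class count (assumed here to match DeepFace's seven emotion labels) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a four-block CNN emotion classifier (Keras).
# Filter counts, kernel sizes, dense width, and dropout rate are assumptions;
# the abstract specifies only the overall block structure.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7  # assumed: the seven DeepFace emotion labels

def build_cnn(input_shape=(224, 224, 3)):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256):  # four convolutional blocks
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)  # fully connected layer
    x = layers.Dropout(0.5)(x)                   # dropout to mitigate overfitting
    return keras.Model(inputs, layers.Dense(NUM_CLASSES, activation="softmax")(x))

model = build_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

The debiasing step can then be read as fitting, one-vs-all, a causal model of the gender effect on each class's predicted probability, removing that effect, and re-aggregating by argmax. The linear specification below is one plausible reading of the procedure, not the paper's exact causal model.

```python
# One-vs-all causal adjustment (sketch): fit a simple causal model
# probability ~ gender per class, subtract the estimated gender effect,
# then select the class with the highest adjusted probability.
# The linear OLS specification is an assumption.
import numpy as np
import statsmodels.api as sm

def debias_ova(probs, gender):
    """probs: (n, K) CNN-predicted class probabilities; gender: (n,) 0/1 array."""
    gender = np.asarray(gender, dtype=float)
    adjusted = np.empty_like(probs)
    for k in range(probs.shape[1]):   # one causal model per emotion class (OvA)
        X = sm.add_constant(gender)
        fit = sm.OLS(probs[:, k], X).fit()
        effect = fit.params[1]        # estimated gender effect on class k
        # Subtract the gender-attributable component, centered so the class's
        # average probability is unchanged.
        adjusted[:, k] = probs[:, k] - effect * (gender - gender.mean())
    return adjusted.argmax(axis=1)    # debiased label: highest adjusted probability
```

Because the adjustment only shifts each class probability by its estimated gender-attributable component, most per-example rankings are preserved, which is consistent with the reported negligible impact on overall accuracy.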
Related papers
- Open-World Semi-Supervised Learning for Node Classification [53.07866559269709]
Open-world semi-supervised learning (Open-world SSL) for node classification is a practical but under-explored problem in the graph community.
We propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification.
arXiv Detail & Related papers (2024-03-18T05:12:54Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Mitigating Nonlinear Algorithmic Bias in Binary Classification [0.0]
This paper proposes the use of causal modeling to detect and mitigate bias that is nonlinear in the protected attribute.
We show that the probability of being correctly classified as "low risk" is lowest among young people.
Based on the fitted causal model, debiased probability estimates are computed, showing improved fairness with little impact on overall accuracy.
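As a rough illustration of what such a nonlinear adjustment can look like, the sketch below fits a quadratic age effect and subtracts it. The quadratic form, the OLS fit, and the variable names are assumptions for illustration, not the paper's specification.

```python
# Sketch: model a nonlinear (here quadratic) age effect on the probability of
# a correct "low risk" classification, then remove the fitted age component.
# The quadratic form is an illustrative assumption.
import numpy as np
import statsmodels.api as sm

def debias_nonlinear(p_low_risk, age):
    age = np.asarray(age, dtype=float)
    X = sm.add_constant(np.column_stack([age, age ** 2]))
    fit = sm.OLS(p_low_risk, X).fit()
    age_effect = X[:, 1:] @ fit.params[1:]                 # fitted age contribution
    return p_low_risk - (age_effect - age_effect.mean())   # debiased estimates
```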
arXiv Detail & Related papers (2023-12-09T01:26:22Z)
- Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
arXiv Detail & Related papers (2022-12-05T23:15:43Z)
- Quantifying Human Bias and Knowledge to guide ML models during Training [0.0]
We introduce an experimental approach to dealing with skewed datasets by including humans in the training process.
We ask humans to rank the importance of features of the dataset, and through rank aggregation, determine the initial weight bias for the model.
We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample.
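The summary does not say which aggregation rule is used; a Borda-count aggregation, as in the assumed sketch below, is one standard way to turn per-annotator rankings into initial feature weights.

```python
# Sketch: turn per-annotator feature-importance rankings into normalized
# initial feature weights via Borda count (an assumed aggregation rule;
# the paper's rank-aggregation method may differ).
import numpy as np

def initial_weights(rankings):
    """rankings: (n_annotators, n_features); each row lists feature indices
    from most to least important."""
    rankings = np.asarray(rankings)
    n_features = rankings.shape[1]
    scores = np.zeros(n_features)
    for row in rankings:
        scores[row] += n_features - np.arange(n_features)  # r-th place earns n-r points
    return scores / scores.sum()  # normalized initial weight bias

print(initial_weights([[2, 0, 1], [0, 2, 1]]))  # two annotators, three features
```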
arXiv Detail & Related papers (2022-11-19T20:49:07Z)
- Understanding CNN Fragility When Learning With Imbalanced Data [1.1444576186559485]
Convolutional neural networks (CNNs) have achieved impressive results on imbalanced image data, but they still have difficulty generalizing to minority classes.
We focus on their latent features to demystify CNN decisions on imbalanced data.
We show that important information regarding the ability of a neural network to generalize to minority classes resides in the class top-K CE and FE.
arXiv Detail & Related papers (2022-10-17T22:40:06Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias [10.639605996067534]
Contextual information is a valuable cue for Deep Neural Networks (DNNs) to learn better representations and improve accuracy.
In COCO, many object categories have a much higher co-occurrence with men compared to women, which can bias a DNN's prediction in favor of men.
We introduce a data repair algorithm using the coefficient of variation, which can curate fair and contextually balanced data for a protected class.
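The coefficient of variation (standard deviation divided by mean) of per-group co-occurrence counts gives a simple imbalance score; the threshold and flagging rule in the sketch below are assumptions, not the paper's exact repair algorithm.

```python
# Sketch: flag object categories whose co-occurrence counts across protected
# groups (e.g., images with men vs. women in COCO) have a high coefficient of
# variation. The threshold is an assumed parameter, not the paper's value.
import numpy as np

def imbalanced_categories(cooccurrence, threshold=0.5):
    """cooccurrence: dict of category -> per-group co-occurrence counts."""
    flagged = {}
    for category, counts in cooccurrence.items():
        counts = np.asarray(counts, dtype=float)
        cv = counts.std() / counts.mean()   # coefficient of variation
        if cv > threshold:
            flagged[category] = cv          # candidate for data repair / rebalancing
    return flagged

print(imbalanced_categories({"tie": [950, 50], "cup": [520, 480]}))
```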
arXiv Detail & Related papers (2021-10-20T06:00:03Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- VaB-AL: Incorporating Class Imbalance and Difficulty with Variational Bayes for Active Learning [38.33920705605981]
We propose a method that can naturally incorporate class imbalance into the Active Learning framework.
We show that our method can be applied to classification tasks on multiple different datasets.
arXiv Detail & Related papers (2020-03-25T07:34:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.