Fairness and bias correction in machine learning for depression
prediction: results from four study populations
- URL: http://arxiv.org/abs/2211.05321v3
- Date: Thu, 26 Oct 2023 09:51:39 GMT
- Title: Fairness and bias correction in machine learning for depression
prediction: results from four study populations
- Authors: Vien Ngoc Dang, Anna Cascarano, Rosa H. Mulder, Charlotte Cecil, Maria
A. Zuluaga, Jerónimo Hernández-González, Karim Lekadir
- Abstract summary: We present a systematic study of bias in machine learning models designed to predict depression.
We find that standard ML approaches regularly show biased behaviors.
We also show that mitigation techniques, both standard and our own post-hoc method, can be effective in reducing the level of unfair bias.
- Score: 3.3136009643108038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A significant level of stigma and inequality exists in mental healthcare,
especially in under-served populations. Inequalities are reflected in the data
collected for scientific purposes. When not properly accounted for, machine
learning (ML) models learnt from data can reinforce these structural
inequalities or biases. Here, we present a systematic study of bias in ML
models designed to predict depression in four different case studies covering
different countries and populations. We find that standard ML approaches
regularly show biased behaviors. We also show that mitigation techniques, both
standard and our own post-hoc method, can be effective in reducing the level of
unfair bias. No single best ML model for depression prediction provides
equality of outcomes. This emphasizes the importance of analyzing fairness
during model selection and transparent reporting about the impact of debiasing
interventions. Finally, we provide practical recommendations to develop
bias-aware ML models for depression risk prediction.
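The abstract describes two recurring steps in bias-aware model development: auditing a trained depression classifier for subgroup disparities, and applying a post-hoc correction when disparities are found. The paper's own post-hoc method is not specified in the abstract, so the following Python sketch only illustrates this kind of workflow under assumptions of my own: it computes group-wise true-positive and positive-prediction rates for a binary classifier and then derives group-specific decision thresholds targeting a common true-positive rate (an equal-opportunity-style correction). All function names, thresholds, and the synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def group_rates(y_true, y_score, group, threshold=0.5):
    """Per-group true-positive and positive-prediction rates at a given threshold."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        y_pred = (y_score[mask] >= threshold).astype(int)
        positives = y_true[mask] == 1
        tpr = float(y_pred[positives].mean()) if positives.any() else float("nan")
        rates[g] = {"tpr": tpr, "positive_rate": float(y_pred.mean())}
    return rates

def posthoc_thresholds(y_true, y_score, group, target_tpr=0.8, grid=None):
    """Illustrative post-hoc correction: choose a per-group threshold whose TPR
    is closest to a common target (equal-opportunity style).
    This is NOT the paper's method, only a generic example."""
    grid = np.linspace(0.05, 0.95, 19) if grid is None else grid
    thresholds = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs = np.array([(y_score[mask] >= t).mean() for t in grid])
        thresholds[g] = float(grid[int(np.argmin(np.abs(tprs - target_tpr)))])
    return thresholds

# Toy usage with synthetic scores (purely illustrative, not study data).
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)     # a hypothetical binary protected attribute
y_true = rng.integers(0, 2, n)    # synthetic depression labels
y_score = np.clip(0.5 * y_true + 0.1 * group + rng.normal(0.0, 0.25, n), 0.0, 1.0)

print(group_rates(y_true, y_score, group))
print(posthoc_thresholds(y_true, y_score, group))
```

In practice, such an audit would be repeated for every candidate model during selection, and the impact of any debiasing intervention on both fairness and predictive performance would be reported transparently, as the paper recommends.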
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Is Your Model "MADD"? A Novel Metric to Evaluate Algorithmic Fairness
for Predictive Student Models [0.0]
We propose a novel metric, the Model Absolute Density Distance (MADD), to analyze models' discriminatory behaviors.
We evaluate our approach on the common task of predicting student success in online courses, using several common predictive classification models; an illustrative density-distance sketch appears after this list.
arXiv Detail & Related papers (2023-05-24T16:55:49Z) - Connecting Fairness in Machine Learning with Public Health Equity [0.0]
Biases in data and model design can result in disparities for certain protected groups and amplify existing inequalities in healthcare.
This study summarizes seminal literature on ML fairness and presents a framework for identifying and mitigating biases in the data and model.
Case studies suggest how the framework can be used to prevent these biases and highlight the need for fair and equitable ML models in public health.
arXiv Detail & Related papers (2023-04-08T10:21:49Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data; a generic reweighing sketch appears after this list.
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Evaluating the Fairness of Deep Learning Uncertainty Estimates in
Medical Image Analysis [3.5536769591744557]
Deep learning (DL) models have shown great success in many medical image analysis tasks.
However, deployment of the resulting models into real clinical contexts requires robustness and fairness across different sub-populations.
Recent studies have shown significant biases in DL models across demographic subgroups, indicating a lack of fairness in the models.
arXiv Detail & Related papers (2023-03-06T16:01:30Z) - Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z) - Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z) - Assessing Social Determinants-Related Performance Bias of Machine
Learning Models: A case of Hyperchloremia Prediction in ICU Population [6.8473641147443995]
We evaluated four classifiers built to predict Hyperchloremia, a condition that often results from aggressive fluid administration in the ICU population.
We observed that adding social determinants features in addition to the lab-based ones improved model performance on all patients.
We urge future researchers to design models that proactively adjust for potential biases and include subgroup reporting.
arXiv Detail & Related papers (2021-11-18T03:58:50Z) - Statistical inference for individual fairness [24.622418924551315]
We focus on the problem of detecting violations of individual fairness in machine learning models.
We develop a suite of inference tools for the adversarial cost function.
We demonstrate the utility of our tools in a real-world case study.
arXiv Detail & Related papers (2021-03-30T22:49:25Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
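The MADD entry above compares the distributions of a model's predicted probabilities across two demographic groups. As referenced in that entry, the sketch below shows a histogram-based density distance in that spirit; the bin count, group encoding, and aggregation are my assumptions, not the metric's published definition.

```python
import numpy as np

def density_distance(y_score, group, n_bins=20):
    """Histogram-based distance between two groups' predicted-probability
    distributions (a MADD-style quantity; details are assumed, not exact).
    0 means identical per-bin densities; larger values mean more disparity."""
    groups = np.unique(group)
    assert len(groups) == 2, "this sketch assumes a binary protected attribute"
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    densities = []
    for g in groups:
        counts, _ = np.histogram(y_score[group == g], bins=bins)
        densities.append(counts / max(counts.sum(), 1))  # normalise to proportions
    return float(np.abs(densities[0] - densities[1]).sum())

# Toy usage with synthetic predicted probabilities.
rng = np.random.default_rng(1)
scores = rng.uniform(0.0, 1.0, 500)
groups = rng.integers(0, 2, 500)
print(density_distance(scores, groups))
```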
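The data-drift entry above lists reweighing as one strategy for improving how well a model conforms to minority populations. The sketch below shows a standard fairness reweighing scheme (in the spirit of Kamiran and Calders), where each example is weighted so that the protected attribute and the label become statistically independent in the weighted data; it is a generic illustration, not that paper's specific procedure.

```python
import numpy as np

def reweighing_weights(y, group):
    """Standard fairness reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), making group and label
    independent in the weighted data. Generic illustration only."""
    y = np.asarray(y)
    group = np.asarray(group)
    weights = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            p_cell = cell.mean()
            if p_cell > 0:
                weights[cell] = (group == g).mean() * (y == label).mean() / p_cell
    return weights

# Toy usage: the weights can be passed as sample_weight to most classifiers.
y = np.array([1, 1, 0, 0, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(reweighing_weights(y, group))
```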