Bias Discovery in Machine Learning Models for Mental Health
- URL: http://arxiv.org/abs/2205.12093v1
- Date: Tue, 24 May 2022 14:17:26 GMT
- Title: Bias Discovery in Machine Learning Models for Mental Health
- Authors: Pablo Mosteiro, Jesse Kuiper, Judith Masthoff, Floortje Scheepers, and Marco Spruit
- Abstract summary: Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in psychiatry.
We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data.
This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness and bias are crucial concepts in artificial intelligence, yet they
are relatively ignored in machine learning applications in clinical psychiatry.
We computed fairness metrics and present bias mitigation strategies using a
model trained on clinical mental health data. We collected structured data
related to the admission, diagnosis, and treatment of patients in the
psychiatry department of the University Medical Center Utrecht. We trained a
machine learning model to predict future administrations of benzodiazepines on
the basis of past data. We found that gender plays an unexpected role in the
predictions; this constitutes bias. Using the AI Fairness 360 package, we
implemented reweighing and discrimination-aware regularization as bias
mitigation strategies, and we explored their implications for model
performance. This is the first application of bias exploration and mitigation
in a machine learning model trained on real clinical psychiatry data.
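As an illustration of the workflow above, the following sketch audits a training set and applies reweighing with the AI Fairness 360 package. Since the Utrecht clinical data are not public, a synthetic frame stands in, and the column names (gender, benzo_future) are assumptions rather than the paper's schema.

```python
# Minimal fairness-audit + reweighing sketch with AI Fairness 360.
# Synthetic stand-in data; column names are hypothetical.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),        # protected attribute (assumed coding)
    "prior_admissions": rng.poisson(2, n),  # stand-in clinical feature
    "benzo_future": rng.integers(0, 2, n),  # label: future benzodiazepine use
})

dataset = BinaryLabelDataset(df=df, label_names=["benzo_future"],
                             protected_attribute_names=["gender"])
priv, unpriv = [{"gender": 1}], [{"gender": 0}]

metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("statistical parity difference:", metric.statistical_parity_difference())

# Reweighing assigns instance weights that make the label statistically
# independent of the protected attribute in the (weighted) training set.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_rw = rw.fit_transform(dataset)
metric_rw = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unpriv,
                                     privileged_groups=priv)
print("after reweighing:", metric_rw.statistical_parity_difference())  # ~0
```

The paper's second strategy, discrimination-aware regularization, ships in the same package as aif360.algorithms.inprocessing.PrejudiceRemover and can be slotted in as the downstream classifier.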
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Precision psychiatry: predicting predictability [0.0]
I review ten challenges in the field of precision psychiatry.
Key challenges include the need for studies on real-world populations and realistic clinical outcome definitions.
Treatment-related factors such as placebo effects and non-adherence to prescriptions must also be considered.
arXiv Detail & Related papers (2023-06-21T13:10:46Z)
- Fairness in Machine Learning meets with Equity in Healthcare [6.842248432925292]
This study proposes an artificial intelligence framework for identifying and mitigating biases in data and models.
A case study is presented to demonstrate how systematic biases in data can lead to amplified biases in model predictions.
Future research aims to test and validate the proposed ML framework in real-world clinical settings to evaluate its impact on promoting health equity.
arXiv Detail & Related papers (2023-05-11T14:25:34Z)
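How systematic data bias turns into prediction bias is easy to demonstrate in miniature. The simulation below is purely hypothetical and unrelated to the paper's case study: positives in one group are systematically under-labeled, and the label gap carries over into the trained model's predicted positive rates.

```python
# Illustrative sketch: biased labels propagating into biased predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                       # sensitive attribute
x = rng.normal(size=(n, 3))
true_y = (x[:, 0] + 0.5 * x[:, 1]
          + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Systematic bias: 40% of true positives in group 0 are mislabeled negative.
y_obs = true_y.copy()
flip = (group == 0) & (true_y == 1) & (rng.random(n) < 0.4)
y_obs[flip] = 0

features = np.column_stack([x, group])
pred = LogisticRegression().fit(features, y_obs).predict(features)

# Compare the group gap in the observed labels with the gap in predictions.
for name, labels in [("observed labels", y_obs), ("model predictions", pred)]:
    gap = labels[group == 1].mean() - labels[group == 0].mean()
    print(f"{name}: positive-rate gap = {gap:.3f}")
```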
- Algorithmic Bias, Generalist Models, and Clinical Medicine [1.9143819780453073]
The dominant paradigm in clinical machine learning is narrow in the sense that models are trained on biomedical datasets for particular clinical tasks.
The emerging paradigm is generalist in the sense that general-purpose language models such as Google's BERT and PaLM are increasingly being adapted for clinical use cases.
Many of these next-generation models provide substantial performance gains over prior clinical models, but at the same time introduce novel kinds of algorithmic bias.
This paper articulates how and in what respects biases in generalist models differ from biases in prior clinical models, and draws out practical recommendations for algorithmic bias mitigation.
arXiv Detail & Related papers (2023-05-06T10:48:51Z)
- Bias Reducing Multitask Learning on Mental Health Prediction [18.32551434711739]
Research on developing machine learning models for mental health detection or prediction has increased.
In this work, we aim to perform a fairness analysis and implement a multi-task learning based bias mitigation method on anxiety prediction models.
Our analysis showed that our anxiety prediction base model introduced some bias with regard to age, income, ethnicity, and whether a participant was born in the U.S.
arXiv Detail & Related papers (2022-08-07T02:28:32Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
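The full shortcut test is built on multi-task training; as a simplified proxy for the question underneath it (does the learned representation encode the sensitive attribute?), one can fit a linear probe on frozen features. Everything below, including the representations, is hypothetical.

```python
# Sketch: linear probe for shortcut encoding of a sensitive attribute.
# A simplified proxy, not the paper's multitask shortcut-testing method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def shortcut_probe_auc(representations, sensitive):
    """Cross-validated AUC of a probe predicting the sensitive attribute
    from a model's penultimate-layer features. AUC near 0.5 suggests the
    attribute is not linearly encoded; a high AUC flags a possible shortcut."""
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, representations, sensitive,
                             cv=5, scoring="roc_auc")
    return float(scores.mean())

# Hypothetical usage with random stand-in representations (expect ~0.5).
rng = np.random.default_rng(0)
reps = rng.normal(size=(500, 64))
sens = rng.integers(0, 2, 500)
print("probe AUC:", shortcut_probe_auc(reps, sens))
```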
- A Machine Learning Model for Predicting, Diagnosing, and Mitigating Health Disparities in Hospital Readmission [0.0]
We propose a machine learning pipeline capable of making predictions as well as detecting and mitigating biases in the data and model predictions.
We evaluate the performance of the proposed method on a clinical dataset using accuracy and fairness measures.
arXiv Detail & Related papers (2022-06-13T16:07:25Z)
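Evaluating with accuracy and fairness measures usually means pairing a performance score with group-gap metrics such as demographic parity and equal opportunity differences; a minimal sketch on hypothetical arrays:

```python
# Sketch: joint accuracy/fairness evaluation on hypothetical predictions.
import numpy as np

def evaluate(y_true, y_pred, group):
    acc = (y_true == y_pred).mean()
    # Demographic parity difference: gap in predicted-positive rates.
    dpd = y_pred[group == 1].mean() - y_pred[group == 0].mean()
    # Equal opportunity difference: gap in true-positive rates.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return {"accuracy": acc,
            "demographic_parity_diff": dpd,
            "equal_opportunity_diff": tpr(1) - tpr(0)}

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(evaluate(y_true, y_pred, group))
```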
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without exact knowledge of the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
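The core move, inferring per-sample bias labels with an auxiliary model and rebalancing training accordingly, can be gestured at as follows. This is a strong simplification of the paper's algorithm: a deliberately shallow model stands in for the bias-capturing network, and samples it fits poorly (bias-conflicting samples) are upweighted.

```python
# Loose sketch of pseudo-bias-label reweighting; not the paper's algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def pseudo_bias_weights(X, y):
    # A low-capacity model tends to latch onto easy, bias-aligned patterns.
    bias_model = DecisionTreeClassifier(max_depth=1).fit(X, y)
    p_correct = bias_model.predict_proba(X)[np.arange(len(y)), y]
    # Upweight samples the bias-capturing model handles poorly.
    return 1.0 - p_correct + 1e-3

# Hypothetical tabular stand-in for image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)
debiased = LogisticRegression().fit(X, y, sample_weight=pseudo_bias_weights(X, y))
```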
- What Do You See in this Patient? Behavioral Testing of Clinical NLP Models [69.09570726777817]
We introduce an extendable testing framework that evaluates the behavior of clinical outcome models with respect to changes in the input.
We show that model behavior varies drastically even when fine-tuned on the same data, and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z)
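Behavioral testing of this kind boils down to perturbing one attribute of the input and checking whether the prediction is invariant. The sketch below is a toy rendering: the substitution table, whitespace tokenization, and model interface are assumptions, not the paper's framework.

```python
# Toy behavioral test: does a clinical text model react to gender terms?
from typing import Callable

SWAPS = {"he": "she", "him": "her", "his": "her", "male": "female"}  # one-way

def gender_swap(note: str) -> str:
    # Naive whitespace tokenization; punctuation handling omitted.
    return " ".join(SWAPS.get(tok, tok) for tok in note.lower().split())

def behavioral_gap(predict: Callable[[str], float], note: str) -> float:
    """Absolute change in predicted risk after swapping gendered terms.
    A large gap flags medically implausible reliance on gender wording."""
    return abs(predict(note) - predict(gender_swap(note)))

# Stand-in model and note, for demonstration only.
dummy_predict = lambda text: 0.8 if "she" in text.split() else 0.6
note = "he was admitted with acute agitation and prior benzodiazepine use"
print(behavioral_gap(dummy_predict, note))  # 0.2: prediction tracks gender
```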
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
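A standard way to realize a discrimination module is an adversarial head that predicts the protected attribute from shared features and is trained through a gradient-reversal layer. The PyTorch sketch below is a generic rendering of that idea, not the authors' exact architecture.

```python
# Sketch: adversarial debiasing via gradient reversal (generic, PyTorch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lam * grad, None

class FairClassifier(nn.Module):
    def __init__(self, d_in=32, d_hid=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.task_head = nn.Linear(d_hid, 1)  # main clinical prediction
        self.adv_head = nn.Linear(d_hid, 1)   # predicts the protected attribute

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        # Reversed gradients push the encoder to remove protected information.
        return self.task_head(z), self.adv_head(GradReverse.apply(z, lam))

model = FairClassifier()
x = torch.randn(8, 32)
y = torch.randint(0, 2, (8, 1)).float()         # clinical label
a = torch.randint(0, 2, (8, 1)).float()         # protected attribute
y_logit, a_logit = model(x)
bce = nn.BCEWithLogitsLoss()
(bce(y_logit, y) + bce(a_logit, a)).backward()  # adversary gradient is reversed
```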
- Hemogram Data as a Tool for Decision-making in COVID-19 Management: Applications to Resource Scarcity Scenarios [62.997667081978825]
The COVID-19 pandemic has challenged emergency response systems worldwide, with widespread reports of essential-service breakdowns and the collapse of health care structures.
This work describes a machine learning model derived from hemogram (complete blood count) exam data collected from symptomatic patients.
The proposed models can predict COVID-19 qRT-PCR results in symptomatic individuals with high accuracy, sensitivity, and specificity.
arXiv Detail & Related papers (2020-05-10T01:45:03Z)
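A model of the kind described, a classifier over complete-blood-count features reported with sensitivity and specificity, might look like the sketch below; the features and data are hypothetical, not the paper's cohort.

```python
# Sketch: hemogram-based qRT-PCR result classifier on hypothetical data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-ins for complete-blood-count features (e.g. leukocytes, lymphocytes,
# platelets, eosinophils); y is the qRT-PCR result (1 = positive).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 1] - X[:, 3] + rng.normal(scale=0.7, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```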
This list is automatically generated from the titles and abstracts of the papers on this site.