Connecting Fairness in Machine Learning with Public Health Equity
- URL: http://arxiv.org/abs/2304.04761v1
- Date: Sat, 8 Apr 2023 10:21:49 GMT
- Title: Connecting Fairness in Machine Learning with Public Health Equity
- Authors: Shaina Raza
- Abstract summary: Biases in data and model design can result in disparities for certain protected groups and amplify existing inequalities in healthcare.
This study summarizes seminal literature on ML fairness and presents a framework for identifying and mitigating biases in the data and model.
Case studies suggest how the framework can be used to prevent these biases and highlight the need for fair and equitable ML models in public health.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) has become a critical tool in public health, offering
the potential to improve population health, diagnosis, treatment selection, and
health system efficiency. However, biases in data and model design can result
in disparities for certain protected groups and amplify existing inequalities
in healthcare. To address this challenge, this study summarizes seminal
literature on ML fairness and presents a framework for identifying and
mitigating biases in the data and model. The framework provides guidance on
incorporating fairness into different stages of the typical ML pipeline, such
as data processing, model design, deployment, and evaluation. To illustrate the
impact of biases in data on ML models, we present examples that demonstrate how
systematic biases can be amplified through model predictions. These case
studies suggest how the framework can be used to prevent these biases and
highlight the need for fair and equitable ML models in public health. This work
aims to inform and guide the use of ML in public health towards a more ethical
and equitable outcome for all populations.
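As a concrete illustration of the evaluation stage the framework covers, the sketch below computes a demographic parity gap over model predictions. This is a minimal sketch of one standard check, not code from the paper; the data, group labels, and tolerance are invented.

```python
# Minimal sketch of a fairness check at the evaluation stage: the
# demographic parity gap, i.e., the spread in positive-prediction rates
# across protected groups. All data and the 0.1 tolerance are invented.
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy binary predictions for two protected groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, groups)
if gap > 0.1:  # tolerance chosen purely for illustration
    print(f"Potential disparity: positive-rate gap = {gap:.2f}")
```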
Related papers
- Exploring Bias and Prediction Metrics to Characterise the Fairness of Machine Learning for Equity-Centered Public Health Decision-Making: A Narrative Review [2.7757900645956943]
There is a lack of comprehensive understanding of algorithmic bias, that is, systematic errors in predicted population health outcomes that arise when machine learning is applied in public health.
The review helps formalize an evaluation framework for ML in public health from an equity perspective.
arXiv Detail & Related papers (2024-08-23T14:47:10Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias [3.455189439319919]
We introduce Cross-Care, the first benchmark framework dedicated to assessing biases and real-world knowledge in large language models (LLMs).
We evaluate how demographic biases embedded in pre-training corpora such as ThePile influence the outputs of LLMs.
Our results highlight substantial misalignment between LLM representation of disease prevalence and real disease prevalence rates across demographic subgroups.
arXiv Detail & Related papers (2024-05-09T02:33:14Z)
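To make the Cross-Care comparison above concrete, the following sketch measures the gap between an LLM's implied disease prevalence and observed epidemiological prevalence per demographic subgroup. The subgroup names and all figures are invented placeholders, not results from the paper.

```python
# Illustrative Cross-Care-style comparison: gap between model-implied and
# observed disease prevalence per subgroup. All numbers are placeholders.
llm_implied = {"group_a": 0.30, "group_b": 0.05}  # e.g., co-mention rates
observed = {"group_a": 0.12, "group_b": 0.10}     # e.g., epidemiological rates

for group in llm_implied:
    gap = llm_implied[group] - observed[group]
    print(f"{group}: implied={llm_implied[group]:.2f} "
          f"observed={observed[group]:.2f} gap={gap:+.2f}")
```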
- Survey of Social Bias in Vision-Language Models [65.44579542312489]
This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z)
- Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods [0.0]
We show one possible approach to mitigate bias concerns by having healthcare institutions collaborate through a federated learning paradigm.
We propose a comprehensive FL approach with adversarial debiasing and a fair aggregation method, suitable for various fairness metrics.
Our method achieves promising fairness performance with the lowest impact on overall discrimination performance (accuracy).
arXiv Detail & Related papers (2023-05-19T02:03:49Z)
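The entry above does not spell out its fair aggregation rule, so the sketch below shows only the standard federated-averaging skeleton such a method would modify, with a hypothetical per-client fairness weight standing in for the paper's rule.

```python
# Generic federated-averaging sketch. The optional fairness_scores
# multiplier is a hypothetical stand-in for a fair aggregation rule;
# it is not the method proposed in the paper.
import numpy as np

def aggregate(client_params, client_sizes, fairness_scores=None):
    """Weighted average of per-client model parameter vectors."""
    w = np.asarray(client_sizes, dtype=float)
    if fairness_scores is not None:
        w = w * np.asarray(fairness_scores, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

# Two toy clients, each holding a 3-parameter "model".
clients = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
print(aggregate(clients, client_sizes=[100, 300], fairness_scores=[1.0, 0.8]))
```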
- Fairness in Machine Learning meets with Equity in Healthcare [6.842248432925292]
This study proposes an artificial intelligence framework for identifying and mitigating biases in data and models.
A case study is presented to demonstrate how systematic biases in data can lead to amplified biases in model predictions.
Future research aims to test and validate the proposed ML framework in real-world clinical settings to evaluate its impact on promoting health equity.
arXiv Detail & Related papers (2023-05-11T14:25:34Z)
- Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN [70.76142503046782]
We propose supplementing bias audits of machine learning (ML) healthcare tools with SLOGAN, an automatic tool for capturing local biases in a clinical prediction task.
SLOGAN adapts an existing tool, LOcal Group biAs detectioN (LOGAN), by contextualizing group bias detection in patient illness severity and past medical history.
On average, SLOGAN identifies larger fairness disparities than LOGAN in over 75% of patient groups while maintaining clustering quality.
arXiv Detail & Related papers (2022-11-16T08:04:12Z)
- Fairness and bias correction in machine learning for depression prediction: results from four study populations [3.3136009643108038]
We present a systematic study of bias in machine learning models designed to predict depression.
We find that standard ML approaches regularly exhibit biased behavior.
We also show that mitigation techniques, both standard and our own post-hoc method, can be effective in reducing the level of unfair bias.
arXiv Detail & Related papers (2022-11-10T03:53:17Z)
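The summary above does not detail the authors' post-hoc method; as a generic illustration of post-hoc correction, the sketch below picks per-group decision thresholds so each group ends up with the same positive-prediction rate. The scores and target rate are invented.

```python
# Generic post-hoc mitigation sketch: per-group thresholds chosen so that
# positive-prediction rates match across groups. Illustrative only; this
# is a common correction pattern, not the paper's method.
import numpy as np

def equalizing_thresholds(scores, groups, target_rate):
    """Per-group threshold giving each group the same positive rate."""
    return {g: np.quantile(scores[groups == g], 1.0 - target_rate)
            for g in np.unique(groups)}

scores = np.array([0.9, 0.4, 0.8, 0.3, 0.6, 0.2, 0.7, 0.1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

thresholds = equalizing_thresholds(scores, groups, target_rate=0.5)
y_pred = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
print(thresholds, y_pred)
```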
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
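For reference, two of the group-fairness criteria such reviews typically formalize, stated for a binary predictor \hat{Y}, label Y, and protected attribute A (standard definitions, not quoted from the review):

```latex
% Demographic parity: equal positive-prediction rates across groups.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)

% Equalized odds: equal true- and false-positive rates across groups.
P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = b),
\quad y \in \{0, 1\}
```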
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep learning-based medical image analysis systems.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale public-available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
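As a minimal sketch of the adversarial pattern described in the entry above: a shared encoder feeds both a task head and an adversary that tries to recover the protected attribute, with a gradient-reversal layer pushing the encoder to discard that signal. Gradient reversal is one common realization, assumed here; it is not necessarily the paper's exact architecture.

```python
# Minimal adversarial-debiasing sketch in PyTorch: a gradient-reversal
# layer lets one optimizer train the adversary normally while training
# the encoder to defeat it. Common pattern, not the paper's exact model.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients flowing back into the encoder

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
task_head = nn.Linear(16, 1)   # predicts the clinical label
adversary = nn.Linear(16, 1)   # predicts the protected attribute

opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
                        *adversary.parameters()], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(32, 10)                       # toy batch of features
y = torch.randint(0, 2, (32, 1)).float()      # clinical label
a = torch.randint(0, 2, (32, 1)).float()      # protected attribute

# One training step: the adversary learns to predict `a`, while the
# reversed gradient drives the encoder to remove that information.
h = encoder(x)
loss = bce(task_head(h), y) + bce(adversary(GradReverse.apply(h)), a)
opt.zero_grad()
loss.backward()
opt.step()
```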
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic steatohepatitis (NASH) and Alzheimer's disease (AD).
UNITE achieves an F1 score of up to 0.841 for AD detection and a PR-AUC of up to 0.609 for NASH detection, outperforming the best state-of-the-art baseline by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
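The UNITE summary above does not specify how its uncertainty estimates are produced; as a generic illustration, the sketch below uses Monte Carlo dropout, one widely used way to attach an uncertainty estimate to a risk prediction. It is not UNITE's actual model.

```python
# Illustrative uncertainty sketch: Monte Carlo dropout. Keeping dropout
# active at inference and averaging repeated forward passes yields a risk
# estimate plus a spread. Generic technique, not UNITE's architecture.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(32, 1), nn.Sigmoid())
model.train()  # keep dropout active at inference time

x = torch.randn(1, 10)  # one toy patient record
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

risk = samples.mean().item()          # predicted disease risk
uncertainty = samples.std().item()    # spread across dropout masks
print(f"risk={risk:.3f} +/- {uncertainty:.3f}")
```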