Fair Machine Learning in Healthcare: A Review
- URL: http://arxiv.org/abs/2206.14397v3
- Date: Thu, 1 Feb 2024 05:03:56 GMT
- Title: Fair Machine Learning in Healthcare: A Review
- Authors: Qizhang Feng, Mengnan Du, Na Zou, Xia Hu
- Abstract summary: We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
- Score: 90.22219142430146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The digitization of healthcare data coupled with advances in computational
capabilities has propelled the adoption of machine learning (ML) in healthcare.
However, these methods can perpetuate or even exacerbate existing disparities,
leading to fairness concerns such as the unequal distribution of resources and
diagnostic inaccuracies among different demographic groups. Addressing these
fairness problems is paramount to prevent further entrenchment of social
injustices. In this survey, we analyze the intersection of fairness in machine
learning and healthcare disparities. We adopt a framework based on the
principles of distributive justice to categorize fairness concerns into two
distinct classes: equal allocation and equal performance. We provide a critical
review of the associated fairness metrics from a machine learning standpoint
and examine biases and mitigation strategies across the stages of the ML
lifecycle, discussing the relationship between biases and their
countermeasures. The paper concludes with a discussion on the pressing
challenges that remain unaddressed in ensuring fairness in healthcare ML, and
proposes several new research directions that hold promise for developing
ethical and equitable ML applications in healthcare.
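To make the survey's two classes concrete, the sketch below (not code from the paper; the metric choices, variable names, and toy data are assumptions) computes one equal-allocation metric, the demographic parity difference in positive prediction rates, and one equal-performance metric, the gap in true positive rates across groups, for a binary classifier.
```python
import numpy as np

def allocation_and_performance_gaps(y_true, y_pred, group):
    """Group fairness gaps for a binary classifier.

    y_true, y_pred, group: 1-D binary arrays (group encodes a sensitive attribute).
    Returns the demographic parity difference (equal allocation) and the
    true-positive-rate difference (equal performance / equal opportunity).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}
    # Equal allocation: difference in positive prediction rates across groups.
    rate = {g: y_pred[group == g].mean() for g in (0, 1)}
    gaps["demographic_parity_diff"] = rate[1] - rate[0]
    # Equal performance: difference in true positive rates across groups.
    tpr = {g: y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)}
    gaps["equal_opportunity_diff"] = tpr[1] - tpr[0]
    return gaps

# Toy example with synthetic predictions (illustrative only).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.4 + 0.1 * group).astype(int)
print(allocation_and_performance_gaps(y_true, y_pred, group))
```
The same pattern extends to other performance gaps, such as false positive rates or calibration error, by swapping the conditioning in the second metric.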
Related papers
- A tutorial on fairness in machine learning in healthcare [0.6311610943467981]
This tutorial is designed to introduce the medical informatics community to the common notions of fairness within machine learning.
We describe the fundamental concepts and methods used to define fairness in ML, including an overview of why models in healthcare may be unfair.
We provide a user-friendly R package for comprehensive group fairness evaluation, enabling researchers and clinicians to assess fairness in their own ML work.
arXiv Detail & Related papers (2024-06-13T16:41:30Z) - Error Parity Fairness: Testing for Group Fairness in Regression Tasks [5.076419064097733]
This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness.
The notion is paired with a suitable permutation test that compares groups on several statistics to explore disparities and identify impacted groups.
Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as a part of larger accountability assessments and algorithm audits.
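A minimal sketch of an error-parity style check under stated assumptions (the single statistic, the two-group coding, and the permutation count are illustrative choices, not the paper's exact procedure): it measures the gap in group-wise mean absolute error and estimates a p-value by shuffling group labels, which are exchangeable under the null of error parity.
```python
import numpy as np

def error_parity_permutation_test(y_true, y_pred, group, n_perm=5000, seed=0):
    """Permutation test for a gap in regression error between two groups.

    Statistic: absolute difference in group-wise mean absolute error.
    Returns the observed gap and a permutation p-value.
    """
    rng = np.random.default_rng(seed)
    errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    group = np.asarray(group)

    def gap(g):
        return abs(errors[g == 1].mean() - errors[g == 0].mean())

    observed = gap(group)
    # Under the null of error parity, group labels are exchangeable.
    perm_gaps = np.array([gap(rng.permutation(group)) for _ in range(n_perm)])
    p_value = (1 + np.sum(perm_gaps >= observed)) / (n_perm + 1)
    return observed, p_value

# Toy usage with a synthetic disparity (illustrative only).
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=400)
y_true = rng.normal(size=400)
y_pred = y_true + rng.normal(scale=1.0 + 0.5 * group, size=400)  # group 1 gets noisier predictions
print(error_parity_permutation_test(y_true, y_pred, group))
```
A small p-value flags a disparity worth investigating; the paper's methodology additionally compares groups on several statistics rather than a single error gap.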
arXiv Detail & Related papers (2022-08-16T17:47:20Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
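As a loose illustration of the idea (the paper's metric is defined via gradients and feature weights; this finite-difference proxy, the toy logistic scorer, and the perturbation size are assumptions), the sketch below accumulates how much a model's predicted probability moves when each input feature is nudged.
```python
import numpy as np

def prediction_sensitivity(predict_fn, X, eps=1e-3):
    """Finite-difference proxy for prediction sensitivity.

    predict_fn: maps an (n, d) array to predicted probabilities of shape (n,).
    Returns the per-example sensitivity, accumulated over input features.
    """
    X = np.asarray(X, dtype=float)
    base = predict_fn(X)
    total = np.zeros(len(X))
    for j in range(X.shape[1]):
        X_plus = X.copy()
        X_plus[:, j] += eps
        # Accumulate the absolute change in prediction per unit perturbation.
        total += np.abs(predict_fn(X_plus) - base) / eps
    return total

# Toy model: a fixed logistic scorer (illustrative only).
w = np.array([1.5, -2.0, 0.5])
predict = lambda X: 1.0 / (1.0 + np.exp(-(X @ w)))

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
print(prediction_sensitivity(predict, X))
```
Averaging the resulting sensitivities within each demographic group and comparing them would give a rough group-level signal, in the spirit of the statistical parity link mentioned above.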
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Algorithm Fairness in AI for Medicine and Healthcare [4.626801344708786]
Algorithm fairness is a challenging problem in delivering equitable care.
Recent evaluations of AI models stratified across race sub-populations have revealed enormous inequalities in how patients are diagnosed, given treatments, and billed for healthcare costs.
arXiv Detail & Related papers (2021-10-01T18:18:13Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
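As a generic illustration of the quantification family (not the paper's specific estimator; the noise rates, cohort sizes, and variable names are assumptions), the sketch below uses Adjusted Classify & Count to correct a naive prevalence estimate of an inferred sensitive attribute, using a small labeled validation set to estimate the attribute classifier's error rates.
```python
import numpy as np

def adjusted_classify_and_count(attr_pred_target, attr_pred_val, attr_true_val):
    """Adjusted Classify & Count (ACC) prevalence estimate.

    attr_pred_target: binary predictions of the sensitive attribute on the target cohort.
    attr_pred_val, attr_true_val: predictions and true attributes on a labeled validation
    set, used to estimate the attribute classifier's TPR and FPR.
    Returns a corrected estimate of the group's prevalence in the target cohort.
    """
    attr_pred_target = np.asarray(attr_pred_target)
    attr_pred_val = np.asarray(attr_pred_val)
    attr_true_val = np.asarray(attr_true_val)

    tpr = attr_pred_val[attr_true_val == 1].mean()   # P(pred=1 | attr=1)
    fpr = attr_pred_val[attr_true_val == 0].mean()   # P(pred=1 | attr=0)
    raw = attr_pred_target.mean()                    # naive classify-and-count
    corrected = (raw - fpr) / (tpr - fpr)            # ACC correction
    return float(np.clip(corrected, 0.0, 1.0))

# Toy usage: a noisy attribute classifier over a cohort where 30% belong to the group.
rng = np.random.default_rng(3)
attr_true_val = rng.integers(0, 2, size=500)
attr_pred_val = np.where(rng.random(500) < 0.15, 1 - attr_true_val, attr_true_val)

attr_true_target = (rng.random(10000) < 0.3).astype(int)
attr_pred_target = np.where(rng.random(10000) < 0.15, 1 - attr_true_target, attr_true_target)
print(adjusted_classify_and_count(attr_pred_target, attr_pred_val, attr_true_val))
```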
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
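The sketch below is a generic adversarial debiasing loop in PyTorch using a gradient-reversal layer, offered as a hedged stand-in for the discrimination and critical modules described above rather than the paper's architecture; the encoder, heads, loss weighting, and synthetic batch are all assumptions.
```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
task_head = nn.Linear(32, 2)   # main clinical label
adv_head = nn.Linear(32, 2)    # adversary predicting the sensitive attribute

params = list(encoder.parameters()) + list(task_head.parameters()) + list(adv_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

# Synthetic batch (illustrative only): features, clinical label, sensitive attribute.
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,))
a = torch.randint(0, 2, (64,))

for step in range(200):
    z = encoder(x)
    task_loss = ce(task_head(z), y)
    # The adversary sees the representation through a gradient-reversal layer,
    # so minimizing its loss pushes the encoder to hide the sensitive attribute.
    adv_loss = ce(adv_head(GradReverse.apply(z, 1.0)), a)
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```
The gradient reversal lets a single optimizer suffice: the adversary head learns to predict the sensitive attribute while the reversed gradient pushes the encoder to discard that information.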
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Assessing Fairness in Classification Parity of Machine Learning Models
in Healthcare [19.33981403623531]
We present preliminary results on fairness in the context of classification parity in healthcare.
We also present some exploratory methods for improving fairness and choosing appropriate classification algorithms in the context of healthcare.
arXiv Detail & Related papers (2021-02-07T04:46:27Z) - An Empirical Characterization of Fair Machine Learning For Clinical Risk
Prediction [7.945729033499554]
The use of machine learning to guide clinical decision making has the potential to worsen existing health disparities.
Several recent works frame the problem as that of algorithmic fairness, a framework that has attracted considerable attention and criticism.
We conduct an empirical study to characterize the impact of penalizing group fairness violations on an array of measures of model performance and group fairness.
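A minimal sketch of what penalizing a group fairness violation can look like in practice (the squared demographic-parity penalty, the weight lam, and the synthetic data are assumptions, not the paper's protocol): the loss adds the squared gap in mean predicted positive probability between groups to the usual cross-entropy.
```python
import torch
import torch.nn as nn

def fairness_penalized_loss(logits, y, a, lam=1.0):
    """Cross-entropy plus a squared demographic-parity penalty.

    logits: (n, 2) scores, y: (n,) labels, a: (n,) binary sensitive attribute.
    The penalty is the squared gap in mean predicted positive probability between groups.
    """
    ce = nn.functional.cross_entropy(logits, y)
    p_pos = torch.softmax(logits, dim=1)[:, 1]
    gap = p_pos[a == 1].mean() - p_pos[a == 0].mean()
    return ce + lam * gap ** 2

# Toy usage: a linear model on synthetic data (illustrative only).
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(256, 8)
y = torch.randint(0, 2, (256,))
a = torch.randint(0, 2, (256,))

for step in range(100):
    loss = fairness_penalized_loss(model(x), y, a, lam=2.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
Sweeping lam traces out the kind of performance-fairness trade-off the entry characterizes empirically.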
arXiv Detail & Related papers (2020-07-20T17:46:31Z)