What is Fair? Defining Fairness in Machine Learning for Health
- URL: http://arxiv.org/abs/2406.09307v4
- Date: Thu, 14 Nov 2024 18:45:35 GMT
- Title: What is Fair? Defining Fairness in Machine Learning for Health
- Authors: Jianhui Gao, Benson Chou, Zachary R. McCaw, Hilary Thurston, Paul Varghese, Chuan Hong, Jessica Gronsbell
- Abstract summary: This review examines notions of fairness used in machine learning for health.
We provide an overview of commonly used fairness metrics and supplement our discussion with a case study of an openly available electronic health record dataset.
We also discuss the outlook for future research, highlighting current challenges and opportunities in defining fairness in health.
- Score: 0.6311610943467981
- License:
- Abstract: Ensuring that machine learning (ML) models are safe, effective, and equitable across all patient groups is essential for clinical decision-making and for preventing the reinforcement of existing health disparities. This review examines notions of fairness used in ML for health, including a review of why ML models can be unfair and how fairness has been quantified in a wide range of real-world examples. We provide an overview of commonly used fairness metrics and supplement our discussion with a case study of an openly available electronic health record (EHR) dataset. We also discuss the outlook for future research, highlighting current challenges and opportunities in defining fairness in health.
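To make the metrics discussion concrete, below is a minimal sketch of how two commonly used group fairness metrics, demographic parity and equal opportunity (one relaxation of equalized odds), can be computed. The arrays and function names are hypothetical and are not taken from the paper's EHR case study.

```python
# Minimal sketch: two common group fairness metrics with NumPy.
# All data here are hypothetical; in the paper's case study these
# would come from an EHR-derived prediction task.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # observed outcomes
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # binary model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute (e.g., sex)

def demographic_parity_diff(y_pred, group):
    """Gap in positive prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true positive rates (recall) between groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))
```

Both metrics reduce fairness to a single between-group gap; as the review discusses, which gap to minimize is itself a modeling decision.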
Related papers
- FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction [10.016644624468762]
We present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations.
As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models.
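FairFML is described as model-agnostic, and this summary does not give its objective, so the sketch below is only one plausible construction under stated assumptions: each site adds a group-gap fairness penalty to its local logistic loss, and the server averages the local weights (FedAvg). The data, names, and penalty form are illustrative, not FairFML's actual algorithm.

```python
# Hedged sketch of fairness-aware federated averaging in the spirit of
# FairFML; penalty form, data, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, a, lam=1.0, lr=0.1, steps=50):
    """One site's training: logistic loss plus a squared group-gap
    fairness penalty, where a is a binary protected attribute."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)
        # Penalty: squared gap in mean predicted risk between groups.
        gap = p[a == 1].mean() - p[a == 0].mean()
        dp = p * (1 - p)  # derivative of sigmoid w.r.t. the logit
        grad_gap = (X[a == 1].T @ dp[a == 1] / max((a == 1).sum(), 1)
                    - X[a == 0].T @ dp[a == 0] / max((a == 0).sum(), 1))
        w = w - lr * (grad_loss + lam * 2 * gap * grad_gap)
    return w

# Three hypothetical sites, each holding its own data.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    a = rng.integers(0, 2, size=200)
    y = (X[:, 0] + 0.5 * a + rng.normal(size=200) > 0).astype(float)
    sites.append((X, y, a))

w_global = np.zeros(5)
for _ in range(10):
    # Each site trains locally; the server averages the weights (FedAvg).
    local_ws = [local_update(w_global.copy(), X, y, a) for X, y, a in sites]
    w_global = np.mean(local_ws, axis=0)

print("global weights:", np.round(w_global, 3))
```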
arXiv Detail & Related papers (2024-10-07T13:02:04Z)
- Fairness in Machine Learning meets with Equity in Healthcare [6.842248432925292]
This study proposes an artificial intelligence framework for identifying and mitigating biases in data and models.
A case study is presented to demonstrate how systematic biases in data can lead to amplified biases in model predictions.
Future research aims to test and validate the proposed ML framework in real-world clinical settings to evaluate its impact on promoting health equity.
arXiv Detail & Related papers (2023-05-11T14:25:34Z)
- Connecting Fairness in Machine Learning with Public Health Equity [0.0]
Biases in data and model design can result in disparities for certain protected groups and amplify existing inequalities in healthcare.
This study summarizes seminal literature on ML fairness and presents a framework for identifying and mitigating biases in the data and model.
Case studies suggest how the framework can be used to prevent these biases and highlight the need for fair and equitable ML models in public health.
arXiv Detail & Related papers (2023-04-08T10:21:49Z)
- Globalizing Fairness Attributes in Machine Learning: A Case Study on Health in Africa [8.566023181495929]
Fairness has implications for global health in Africa, which is already marked by inequitable power imbalances between the Global North and South.
We propose fairness attributes for consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities.
arXiv Detail & Related papers (2023-04-05T02:10:53Z)
- Privacy-preserving machine learning for healthcare: open challenges and future perspectives [72.43506759789861]
We conduct a review of recent literature concerning Privacy-Preserving Machine Learning (PPML) for healthcare.
We primarily focus on privacy-preserving training and inference-as-a-service.
The aim of this review is to guide the development of private and efficient ML models in healthcare.
arXiv Detail & Related papers (2023-03-27T19:20:51Z)
- Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN [70.76142503046782]
We propose SLOGAN, an automatic tool for capturing local biases in clinical prediction tasks, to supplement fairness audits of machine learning-based (ML) healthcare tools.
SLOGAN adapts an existing tool, LOGAN (LOcal Group biAs detectioN), by contextualizing group bias detection in patient illness severity and past medical history.
On average, SLOGAN identifies larger fairness disparities than LOGAN in over 75% of patient groups while maintaining clustering quality.
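As a rough illustration of local (rather than global) bias detection in the spirit of LOGAN and SLOGAN, the sketch below clusters patients on severity and history features and reports per-cluster error-rate gaps across a protected attribute. The features, clustering choice, and data are assumptions, not the tool's actual implementation.

```python
# Hedged sketch of local group bias detection: cluster patients, then
# compare per-cluster error rates across a protected attribute.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 500
severity = rng.uniform(0, 1, size=n)      # stand-in for illness severity
history = rng.normal(size=(n, 3))         # stand-in for medical history
group = rng.integers(0, 2, size=n)        # protected attribute
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)       # hypothetical model outputs

# Contextualize the clustering with severity and history, as SLOGAN is
# described to do, rather than clustering on raw features alone.
features = np.column_stack([severity, history])
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

for c in np.unique(clusters):
    mask = clusters == c
    errs = (y_true[mask] != y_pred[mask]).astype(float)
    g1, g0 = errs[group[mask] == 1], errs[group[mask] == 0]
    if len(g1) and len(g0):
        print(f"cluster {c}: local error-rate gap = {abs(g1.mean() - g0.mean()):.3f}")
```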
arXiv Detail & Related papers (2022-11-16T08:04:12Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness assessment of clinical ML systems.
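This summary does not detail the multi-task method itself, so the sketch below shows only a much-simplified shortcut probe under assumed data: if a protected attribute can be recovered from a model's risk scores, the predictions may rest on attribute-correlated shortcuts.

```python
# Hedged sketch: a simplified shortcut probe, not the paper's multi-task
# method. High probe accuracy means the risk scores carry attribute-
# correlated signal, a possible shortcut worth auditing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
a = rng.integers(0, 2, size=n)            # protected attribute
X = rng.normal(size=(n, 5))
X[:, 0] += 2.0 * a                        # feature spuriously tied to a
y = (X[:, 0] + rng.normal(size=n) > 1.0).astype(int)

clf = LogisticRegression().fit(X, y)      # stand-in for a clinical model
scores = clf.predict_proba(X)[:, [1]]     # its risk scores

probe = LogisticRegression().fit(scores, a)
print("probe accuracy recovering attribute:", probe.score(scores, a))
```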
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's sensitivity to perturbations in its input features.
We show that the metric can be theoretically linked to a specific notion of group fairness (statistical parity) and to individual fairness.
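As a hedged illustration of the idea (not the paper's exact formulation, which involves learned feature weighting), the sketch below estimates a model's mean prediction sensitivity to each input feature by finite differences; the names and data are hypothetical.

```python
# Hedged sketch of a prediction-sensitivity style fairness probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(size=500) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def sensitivity(clf, X, j, eps=1e-3):
    """Mean finite-difference sensitivity of P(y=1|x) to feature j."""
    Xp = X.copy()
    Xp[:, j] += eps
    base = clf.predict_proba(X)[:, 1]
    pert = clf.predict_proba(Xp)[:, 1]
    return np.abs(pert - base).mean() / eps

for j in range(X.shape[1]):
    print(f"feature {j}: sensitivity = {sensitivity(clf, X, j):.3f}")
# A large sensitivity on a protected feature (or a proxy for one) flags a
# potential statistical-parity violation at the individual level.
```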
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.