FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction
- URL: http://arxiv.org/abs/2410.17269v1
- Date: Mon, 07 Oct 2024 13:02:04 GMT
- Title: FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction
- Authors: Siqi Li, Qiming Wu, Xin Li, Di Miao, Chuan Hong, Wenjun Gu, Yuqing Shang, Yohei Okada, Michael Hao Chen, Mengying Yan, Yilin Ning, Marcus Eng Hock Ong, Nan Liu
- Abstract summary: We present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations.
As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models.
- Score: 10.016644624468762
- Abstract:
Objective: Mitigating algorithmic disparities is a critical challenge in healthcare research, where ensuring equity and fairness is paramount. While large-scale healthcare data exist across multiple institutions, cross-institutional collaborations often face privacy constraints, highlighting the need for privacy-preserving solutions that also promote fairness.
Materials and Methods: In this study, we present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations while preserving patient privacy. As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Results: We demonstrate that the proposed FairFML framework enhances fairness in federated learning (FL) models without compromising predictive performance. Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models, as measured by receiver operating characteristic analysis.
Discussion and Conclusion: FairFML offers a promising and flexible solution for FL collaborations, with its adaptability allowing seamless integration with various FL frameworks and models, from traditional statistical methods to deep learning techniques. This makes FairFML a robust approach for developing fairer FL models across diverse clinical and biomedical applications.
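The abstract does not spell out FairFML's optimization details, so the following is only a minimal sketch of a FairFML-style setup: each client adds a group-fairness penalty (here, a squared demographic-parity gap between genders) to its local loss before standard FedAvg aggregation. The logistic model, penalty form, and weight `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, a, lam=1.0, lr=0.1, steps=50):
    # One client's local training: logistic loss plus a squared
    # demographic-parity gap between the two gender groups.
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_bce = X.T @ (p - y) / len(y)
        g1, g0 = a == 1, a == 0
        gap = p[g1].mean() - p[g0].mean()
        dp = p * (1 - p)                      # sigmoid derivative
        dgap = (X[g1] * dp[g1, None]).mean(0) - (X[g0] * dp[g0, None]).mean(0)
        w = w - lr * (grad_bce + lam * 2.0 * gap * dgap)
    return w

def fedavg(client_weights, sizes):
    # Standard FedAvg: sample-size-weighted average of client models.
    return np.average(client_weights, axis=0, weights=np.asarray(sizes, float))

def make_client(n):                           # synthetic "hospital"
    X = rng.normal(size=(n, 3))
    a = rng.integers(0, 2, n)                 # sensitive attribute (gender)
    y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y, a

clients = [make_client(n) for n in (200, 300, 250)]
w = np.zeros(3)
for _ in range(20):                           # communication rounds
    local_ws = [local_update(w.copy(), X, y, a) for X, y, a in clients]
    w = fedavg(local_ws, [len(y) for _, y, _ in clients])

X, y, a = map(np.concatenate, zip(*clients))  # pooled evaluation
p = sigmoid(X @ w)
print("demographic-parity gap:", abs(p[a == 1].mean() - p[a == 0].mean()))
```

Under this reading, the reported "up to 65%" fairness improvement would correspond to a relative gap reduction, (gap_centralized - gap_FairFML) / gap_centralized; the paper's exact fairness metric is not stated in this summary.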
Related papers
- Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine Learning in Healthcare [6.608905791768002]
We propose Fairness-Aware Interpretable Modeling (FAIM) to improve model fairness without compromising performance.
FAIM features an interactive interface for identifying a "fairer" model from a set of high-performing models (see the sketch after this entry).
We show that FAIM models not only exhibit satisfactory discriminative performance but also significantly mitigate biases as measured by well-established fairness metrics.
arXiv Detail & Related papers (2024-03-08T11:51:00Z)
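FAIM's interactive interface is not described in this summary; the sketch below only illustrates the underlying selection idea, restricting to near-top performers and then minimizing a fairness gap. The `eps` threshold, metric names, and candidate values are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    auc: float           # discriminative performance
    fairness_gap: float  # e.g., an equalized-odds difference; lower is fairer

def select_fairer_model(candidates, eps=0.01):
    # Among models within `eps` AUC of the best, return the fairest one.
    best_auc = max(c.auc for c in candidates)
    near_best = [c for c in candidates if c.auc >= best_auc - eps]
    return min(near_best, key=lambda c: c.fairness_gap)

models = [
    Candidate("xgboost", auc=0.840, fairness_gap=0.12),
    Candidate("logreg",  auc=0.835, fairness_gap=0.04),
    Candidate("mlp",     auc=0.790, fairness_gap=0.02),  # too far below best AUC
]
print(select_fairer_model(models).name)  # -> logreg
```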
- Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two important dimensions of fairness for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously (both notions are illustrated after this entry).
arXiv Detail & Related papers (2023-12-09T11:37:30Z)
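This summary does not state mFairFL's mechanism, so the sketch below only illustrates the two fairness dimensions it targets, with illustrative metric choices: a predicted-risk gap across demographic groups for group fairness, and the spread of per-client losses for client fairness.

```python
import numpy as np

def group_fairness_gap(y_prob, sensitive):
    # Group fairness: gap in mean predicted risk across demographic groups.
    groups = np.unique(sensitive)
    means = [y_prob[sensitive == g].mean() for g in groups]
    return max(means) - min(means)

def client_fairness_spread(per_client_losses):
    # Client fairness: a model that serves all sites uniformly keeps the
    # spread of local losses small.
    losses = np.asarray(per_client_losses, float)
    return losses.max() - losses.min()

rng = np.random.default_rng(1)
probs = rng.uniform(size=500)
sens = rng.integers(0, 2, 500)
print("group gap:  ", group_fairness_gap(probs, sens))
print("client gap: ", client_fairness_spread([0.31, 0.28, 0.55]))
```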
- Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods [0.0]
We show one possible approach to mitigating bias concerns by having healthcare institutions collaborate through a federated learning paradigm.
We propose a comprehensive FL approach with adversarial debiasing and a fair aggregation method, suitable for various fairness metrics.
Our method achieves promising fairness performance with the lowest impact on overall discriminative performance (accuracy); a debiasing sketch follows this entry.
arXiv Detail & Related papers (2023-05-19T02:03:49Z)
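Gradient reversal is the canonical way to implement adversarial debiasing, so the sketch below uses it for one client's local step; whether this paper uses exactly this construction, and how its fair aggregation method works, is not stated in the summary. All layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; reversed, scaled gradient on backward.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DebiasedClassifier(nn.Module):
    def __init__(self, d_in, d_hid=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.clf = nn.Linear(d_hid, 1)   # predicts the clinical label
        self.adv = nn.Linear(d_hid, 1)   # tries to recover the sensitive attribute

    def forward(self, x, lam=1.0):
        h = self.encoder(x)
        return self.clf(h), self.adv(GradReverse.apply(h, lam))

# One local step: the adversary minimizes its loss, while the reversed
# gradient pushes the encoder to hide the sensitive attribute.
model = DebiasedClassifier(d_in=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32, 1)).float()   # clinical label
a = torch.randint(0, 2, (32, 1)).float()   # sensitive attribute
logit_y, logit_a = model(x)
bce = nn.BCEWithLogitsLoss()
loss = bce(logit_y, y) + bce(logit_a, a)
opt.zero_grad(); loss.backward(); opt.step()
```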
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE (both ideas are sketched after this entry).
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
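A minimal reading of "batch normalization adaptive to the sensitive attribute" is to keep separate normalization statistics and affine parameters per group; FairAdaBN's exact parameter sharing may differ. The `fate` function is likewise one plausible reading of the FATE definition given in the summary, not the paper's exact formula.

```python
import torch
import torch.nn as nn

class GroupAdaptiveBN(nn.Module):
    # One BatchNorm per sensitive-attribute group (illustrative assumption).
    def __init__(self, num_features, num_groups=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_groups))

    def forward(self, x, group):
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            mask = group == g
            if mask.any():
                out[mask] = bn(x[mask])
        return out

def fate(gap_base, gap_fair, acc_base, acc_fair, mu=1.0):
    # Plausible reading of FATE: normalized fairness improvement minus
    # mu times the normalized accuracy drop.
    return (gap_base - gap_fair) / gap_base - mu * (acc_base - acc_fair) / acc_base

x = torch.randn(16, 8)
g = torch.arange(16) % 2   # 8 samples per group keeps train-mode BN happy
print(GroupAdaptiveBN(8)(x, g).shape)
print(fate(gap_base=0.10, gap_fair=0.03, acc_base=0.82, acc_fair=0.81))
```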
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach to obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints (the problem shape is sketched after this entry).
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
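The sketch below only illustrates the shape of the problem, evaluating a penalized form of "maximize AUC subject to |AUC_group0 - AUC_group1| <= kappa"; the paper's actual constraint class, smooth surrogates, and stochastic solver are not given in this summary, and `kappa`/`rho` are illustrative.

```python
import numpy as np

def auc(scores, labels):
    # Empirical AUC: P(score_pos > score_neg), ties counted as 1/2.
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def fairness_constrained_objective(scores, labels, group, kappa=0.05, rho=10.0):
    # Penalized constrained objective: overall AUC minus a hinge penalty
    # on the cross-group AUC gap exceeding kappa.
    overall = auc(scores, labels)
    gap = abs(auc(scores[group == 0], labels[group == 0])
              - auc(scores[group == 1], labels[group == 1]))
    return overall - rho * max(0.0, gap - kappa)

rng = np.random.default_rng(2)
s = rng.uniform(size=400)
y = rng.integers(0, 2, 400)
g = rng.integers(0, 2, 400)
print(fairness_constrained_objective(s, y, g))
```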
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness assessment of clinical ML systems (see the sketch after this entry).
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
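One way to read the multi-task idea: train variants of the model with an auxiliary head that encodes the sensitive attribute to varying degrees, then check whether attribute decodability from the representation tracks the fairness gap. The sketch and its numbers are hypothetical illustrations of that check, not the paper's procedure.

```python
import numpy as np

def shortcut_test(attr_decodability, fairness_gap):
    # Across models trained with different weights on a "predict the
    # sensitive attribute" auxiliary head, a strong positive correlation
    # between attribute decodability and the fairness gap flags a shortcut.
    return np.corrcoef(attr_decodability, fairness_gap)[0, 1]

# Hypothetical sweep: larger aux-head weight -> attribute more encoded.
decod = np.array([0.55, 0.62, 0.71, 0.80, 0.88])  # probe AUC for the attribute
gaps  = np.array([0.02, 0.04, 0.07, 0.11, 0.16])  # equal-opportunity gap
r = shortcut_test(decod, gaps)
print(f"correlation = {r:.2f} -> shortcut suspected" if r > 0.5 else r)
```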
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging [23.08596805950814]
We present a robust and label-efficient self-supervised FL framework for medical image analysis.
Specifically, we introduce a novel distributed self-supervised pre-training paradigm into the existing FL pipeline (a generic sketch follows this entry).
We show that our self-supervised FL algorithm generalizes well to out-of-distribution data and learns federated models more effectively in limited-label scenarios.
arXiv Detail & Related papers (2022-05-17T18:33:43Z)
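The summary does not name the pretext task, so the sketch below shows a generic label-free local round, a SimCLR-style contrastive loss on two noisy views, that a server would then aggregate with FedAvg; the paper's actual self-supervised paradigm may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ntxent(z1, z2, tau=0.5):
    # Contrastive (NT-Xent) loss: view i should match view i+n among all 2n.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim = sim - 1e9 * torch.eye(2 * n)       # mask self-similarity
    target = torch.arange(2 * n).roll(n)     # positive pair indices
    return F.cross_entropy(sim, target)

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.SGD(encoder.parameters(), lr=0.1)

# One client's label-free step: two augmented "views" of the same batch.
x = torch.randn(64, 32)
loss = ntxent(encoder(x + 0.1 * torch.randn_like(x)),
              encoder(x + 0.1 * torch.randn_like(x)))
opt.zero_grad(); loss.backward(); opt.step()
# Server side: FedAvg these encoder weights across clients as usual.
```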
- MS Lesion Segmentation: Revisiting Weighting Mechanisms for Federated Learning [92.91544082745196]
Federated learning (FL) has been widely employed for medical image analysis.
However, FL's performance is limited on multiple sclerosis (MS) lesion segmentation tasks.
We propose the first FL MS lesion segmentation framework, built on two effective re-weighting mechanisms (one generic instance is sketched after this entry).
arXiv Detail & Related papers (2022-05-03T14:06:03Z)
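The two mechanisms are not described in this summary; the sketch below shows one generic member of the re-weighted-aggregation family, blending data size with client difficulty (higher recent loss earns more weight). The blend `alpha` and softmax weighting are illustrative assumptions.

```python
import numpy as np

def reweighted_fedavg(client_params, n_samples, val_losses, alpha=0.5):
    # FedAvg whose weights blend sample counts with a softmax over recent
    # validation losses, so struggling sites pull the global model harder.
    size_w = np.asarray(n_samples, float)
    size_w /= size_w.sum()
    loss_w = np.exp(val_losses) / np.exp(val_losses).sum()
    w = alpha * size_w + (1 - alpha) * loss_w
    return np.average(client_params, axis=0, weights=w)

params = np.stack([np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)])
print(reweighted_fedavg(params, [100, 400, 50], np.array([0.9, 0.4, 1.3])))
```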
- Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation [66.44449514373746]
Cross-silo federated learning (FL) has attracted much attention in deep-learning-based medical imaging analysis in recent years.
However, there can be a gap between a model trained with FL and one trained centrally.
We propose a novel training framework, FedSM, to avoid the client drift issue and successfully close this generalization gap.
arXiv Detail & Related papers (2022-03-18T19:50:07Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep-learning-based medical image analysis systems.
Specifically, we add a discrimination module against bias and a critic module that predicts unfairness within the base classification model (a loose sketch follows this entry).
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
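A loose reading of the two-module design: a discrimination head to adversarially remove bias (trained as in the gradient-reversal sketch earlier) and a critic head that only *estimates* unfairness for detection. The architecture and head names below are illustrative, not the authors' exact model.

```python
import torch
import torch.nn as nn

class FairnessAwareNet(nn.Module):
    def __init__(self, d_in, d_hid=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.task = nn.Linear(d_hid, 2)    # base classification head
        self.disc = nn.Linear(d_hid, 2)    # adversary: guess the sensitive attr
        self.critic = nn.Linear(d_hid, 1)  # regresses a per-sample unfairness score

    def forward(self, x):
        h = self.body(x)
        # Features detached for the auxiliary heads' own update step; the
        # encoder's debiasing step would use a reversed gradient instead.
        return self.task(h), self.disc(h.detach()), self.critic(h.detach())

net = FairnessAwareNet(d_in=10)
logits, attr_logits, unfairness = net(torch.randn(4, 10))
print(logits.shape, attr_logits.shape, unfairness.shape)
```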