Improving Fairness in AI Models on Electronic Health Records: The Case
for Federated Learning Methods
- URL: http://arxiv.org/abs/2305.11386v1
- Date: Fri, 19 May 2023 02:03:49 GMT
- Title: Improving Fairness in AI Models on Electronic Health Records: The Case
for Federated Learning Methods
- Authors: Raphael Poulain, Mirza Farhan Bin Tarek and Rahmatollah Beheshti
- Abstract summary: We show one possible approach to mitigate bias concerns by having healthcare institutions collaborate through a federated learning paradigm.
We propose a comprehensive FL approach with adversarial debiasing and a fair aggregation method, suitable for various fairness metrics.
Our method has achieved promising fairness performance with the lowest impact on overall discrimination performance (accuracy).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing AI tools that preserve fairness is of critical importance,
specifically in high-stakes applications such as those in healthcare. However,
health AI models' overall prediction performance is often prioritized over the
possible biases such models could have. In this study, we show one possible
approach to mitigate bias concerns by having healthcare institutions
collaborate through a federated learning paradigm (FL; which is a popular
choice in healthcare settings). While FL methods with an emphasis on fairness
have been previously proposed, their underlying models and local implementation
techniques, as well as their possible applications to the healthcare domain,
remain largely underinvestigated. Therefore, we propose a comprehensive FL
approach with adversarial debiasing and a fair aggregation method, suitable for
various fairness metrics, in the healthcare domain where electronic health
records are used. Not only does our approach explicitly mitigate bias as part of
the optimization process, but an FL-based paradigm also implicitly helps to
address data imbalance and increase the effective data size, offering a
practical solution for healthcare applications. We empirically demonstrate our
method's superior performance on multiple experiments simulating large-scale
real-world scenarios and compare it to several baselines. Our method has
achieved promising fairness performance with the lowest impact on overall
discrimination performance (accuracy).
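The abstract names the two ingredients (adversarial debiasing locally, fairness-aware aggregation globally) but the listing carries no code. The PyTorch sketch below is a minimal, hypothetical rendering of that recipe; the module shapes, the alternating adversary/encoder updates, and the inverse-gap aggregation weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' released code: adversarial debiasing
# inside each client's local update, plus a fairness-weighted FedAvg step.
import torch
import torch.nn as nn

class DebiasedClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, 1)  # predicts the clinical outcome
        self.adversary = nn.Linear(hidden, 1)  # tries to recover the protected attribute

def local_update(model, loader, lam=1.0, lr=1e-3):
    """One client round: the adversary learns to recover the protected
    attribute `a` from the representation, while the encoder + task head
    learn to predict `y` and to fool the adversary."""
    bce = nn.BCEWithLogitsLoss()
    opt_main = torch.optim.Adam(
        list(model.encoder.parameters()) + list(model.task_head.parameters()), lr=lr)
    opt_adv = torch.optim.Adam(model.adversary.parameters(), lr=lr)
    for x, y, a in loader:  # float tensors: features, label, protected attribute
        # Step 1: train the adversary on a detached representation.
        adv_loss = bce(model.adversary(model.encoder(x).detach()).squeeze(-1), a)
        opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
        # Step 2: train encoder + classifier to predict y while fooling the adversary.
        z = model.encoder(x)
        loss = (bce(model.task_head(z).squeeze(-1), y)
                - lam * bce(model.adversary(z).squeeze(-1), a))
        opt_main.zero_grad(); loss.backward(); opt_main.step()
    return model.state_dict()

def fair_fedavg(client_states, fairness_gaps, eps=1e-8):
    """Server step: FedAvg variant that down-weights clients whose local
    fairness gap (e.g., an equalized-odds gap) is large -- an assumed rule."""
    w = torch.tensor([1.0 / (g + eps) for g in fairness_gaps])
    w = w / w.sum()
    return {k: sum(wi * s[k] for wi, s in zip(w, client_states))
            for k in client_states[0]}
```

A round would call local_update at each hospital, measure each hospital's fairness gap on held-out data, and then call fair_fedavg to form the next global model.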
Related papers
- A New Perspective to Boost Performance Fairness for Medical Federated Learning [37.48845838838735]
We propose Fed-LWR to improve performance fairness from the perspective of feature shift.
Specifically, we dynamically perceive the bias of the global model across all hospitals by estimating the layer-wise difference in feature representations.
We evaluate our method on two widely used federated medical image segmentation benchmarks.
arXiv Detail & Related papers (2024-10-12T17:19:46Z)
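The Fed-LWR entry hinges on one measurable quantity: how far each hospital's layer-wise feature representations drift from the global model's. A small numpy sketch of that idea, with the mean per-layer L2 distance and the softmax weighting as assumed details rather than the paper's exact estimator:

```python
import numpy as np

def layerwise_shift(global_feats, client_feats):
    """Mean L2 distance between global and client feature maps, layer by
    layer. Each argument is a list of (batch, dim) arrays, one per layer."""
    return float(np.mean([np.linalg.norm(g - c, axis=1).mean()
                          for g, c in zip(global_feats, client_feats)]))

def aggregation_weights(shifts, temperature=1.0):
    """Softmax over shifts: hospitals whose features drift most from the
    global model pull harder on the next aggregation (one assumed reading
    of 'performance fairness via feature shift')."""
    s = np.asarray(shifts) / temperature
    e = np.exp(s - s.max())
    return e / e.sum()

# Toy example: three hospitals, two layers of fake feature statistics.
rng = np.random.default_rng(0)
global_feats = [rng.normal(size=(8, 4)) for _ in range(2)]
hospitals = [[g + rng.normal(scale=s, size=g.shape) for g in global_feats]
             for s in (0.1, 0.5, 1.0)]
shifts = [layerwise_shift(global_feats, h) for h in hospitals]
print(aggregation_weights(shifts))  # most-shifted hospital gets the most weight
```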
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
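The MITA summary is terse; as a guess at what energy-based, two-sided adaptation could look like, the sketch below uses a classifier's free energy (negative logsumexp of the logits, a common energy-based reading of classifiers) and lets the parameters and the test batch each take a step that lowers it. The energy choice and step sizes are assumptions, not the paper's method.

```python
import torch

def free_energy(logits):
    """Classifier free energy, E(x) = -logsumexp(logits): a common choice in
    energy-based views of classifiers (assumed here, not confirmed for MITA)."""
    return -torch.logsumexp(logits, dim=-1).mean()

def meet_in_the_middle_step(model, x, lr_model=1e-4, lr_data=1e-2):
    """One hypothetical mutual-adaptation step: parameters and the test batch
    each take a gradient step that lowers the same energy, i.e., the model
    and the data move toward each other from opposite directions."""
    x = x.clone().detach().requires_grad_(True)
    free_energy(model(x)).backward()
    with torch.no_grad():
        for p in model.parameters():   # model side: adapt the parameters
            if p.grad is not None:
                p -= lr_model * p.grad
                p.grad = None
        x -= lr_data * x.grad          # data side: adapt the test batch
    return x.detach()
```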
- FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction [10.016644624468762]
We present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations.
As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models.
arXiv Detail & Related papers (2024-10-07T13:02:04Z)
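The FairFML snippet reports results rather than mechanics. One common model-agnostic recipe matching its description is to add a fairness penalty to every site's local objective; the demographic-parity gap below is an assumed stand-in for whatever metric FairFML actually optimizes.

```python
import torch
import torch.nn.functional as F

def parity_gap(probs, group):
    """Absolute difference in mean predicted risk between two groups
    (e.g., gender); assumes each batch contains both groups."""
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

def fair_local_loss(model, x, y, group, lam=0.5):
    """Per-site objective: task loss plus a tunable fairness penalty.
    Model-agnostic in that only model(x) is required."""
    logits = model(x).squeeze(-1)
    task = F.binary_cross_entropy_with_logits(logits, y)
    return task + lam * parity_gap(torch.sigmoid(logits), group)
```

Each site would minimize fair_local_loss on its own cohort, with lam trading fairness against discrimination performance.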
- Aligning (Medical) LLMs for (Counterfactual) Fairness [2.089191490381739]
Large Language Models (LLMs) have emerged as promising solutions for medical and clinical decision support applications.
LLMs are subject to different types of biases, which can lead to unfair treatment of individuals, worsening health disparities, and reducing trust in AI-augmented medical tools.
We present a new model alignment approach for aligning LLMs using a preference optimization method within a knowledge distillation framework.
arXiv Detail & Related papers (2024-08-22T01:11:27Z)
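The alignment entry names preference optimization inside a knowledge-distillation framework without further detail. A DPO-style pairwise loss, with the preferred completion supplied by a debiased teacher and the rejected one by the biased student, is one plausible instantiation; the pairing and the loss form are my assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_style_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style objective on (chosen, rejected) completion log-probs. In a
    distillation setting, 'chosen' could be a debiased teacher's answer and
    'rejected' the student's biased one -- an assumed pairing, not the
    paper's confirmed recipe. `ref_*` are the frozen reference model's
    log-probs for the same completions."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()
```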
- Policy Optimization for Personalized Interventions in Behavioral Health [8.10897203067601]
Behavioral health interventions, delivered through digital platforms, have the potential to significantly improve health outcomes.
We study the problem of optimizing personalized interventions for patients to maximize a long-term outcome.
We present a new approach for this problem that we dub DecompPI, which decomposes the state space for a system of patients to the individual level.
arXiv Detail & Related papers (2023-03-21T21:42:03Z)
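The DecompPI entry's key idea, decomposing the state space of a whole patient panel to the individual level, can be illustrated with a toy allocation rule: score each patient with a shared individual-level model and spend a limited intervention budget greedily. The one-step uplift scoring below is an assumed simplification of the actual algorithm.

```python
import numpy as np

def decomposed_policy(patient_states, q_individual, budget):
    """Score every patient independently with a shared individual-level model
    and spend a limited intervention budget on the largest estimated one-step
    uplift -- an assumed simplification of DecompPI's decomposition."""
    uplift = np.array([q_individual(s, 1) - q_individual(s, 0)
                       for s in patient_states])
    return set(np.argsort(uplift)[::-1][:budget].tolist())

# Toy usage: uplift grows with a scalar 'risk' state, budget of one nudge.
print(decomposed_policy([0.1, 0.9, 0.5], lambda s, a: a * s, budget=1))  # {1}
```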
- Decentralized Distributed Learning with Privacy-Preserving Data Synthesis [9.276097219140073]
In the medical field, multi-center collaborations are often sought to yield more generalizable findings by leveraging the heterogeneity of patient and clinical data.
Recent privacy regulations hinder the sharing of data and, consequently, the development of machine learning-based solutions that support diagnosis and prognosis.
We present a decentralized distributed method that integrates features from local nodes, providing models able to generalize across multiple datasets while maintaining privacy.
arXiv Detail & Related papers (2022-06-20T23:49:38Z)
- Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, where the suboptimality of the learned policies is comparable to the rate achievable as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z)
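As a concrete, if simplified, picture of federated policy optimization for offline RL: each site runs value updates on its own logged transitions and a server averages the results, FedAvg-style. The tabular Q-learning and data-size weighting below are assumptions for illustration; the paper's algorithm and its suboptimality guarantee are more refined.

```python
import numpy as np

def local_q_update(Q, transitions, gamma=0.99, lr=0.1):
    """One sweep of tabular Q-learning over a site's logged (offline) data.
    `transitions` is a list of (state, action, reward, next_state) indices."""
    Q = Q.copy()
    for s, a, r, s_next in transitions:
        Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

def federated_round(Q_global, site_data, site_sizes):
    """Server round: every site refines the shared Q on local data; the
    server takes a data-size-weighted average (assumed FedAvg-style step)."""
    Qs = [local_q_update(Q_global, d) for d in site_data]
    w = np.asarray(site_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * Qi for wi, Qi in zip(w, Qs))
```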
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
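The entry above specifies two auxiliary modules attached to a base classifier: a discrimination module targeting bias and a critic that predicts unfairness. A skeletal PyTorch layout of that multi-task arrangement, with head shapes and the per-sample notion of "unfairness" assumed:

```python
import torch.nn as nn

class FairnessMultiTask(nn.Module):
    """Base classifier plus the two auxiliary heads the summary describes:
    a discrimination module that tries to recover the protected attribute
    (to be trained adversarially) and a critic that predicts per-sample
    unfairness (here: whether the base model errs). Shapes are assumed."""
    def __init__(self, backbone, feat_dim, n_classes):
        super().__init__()
        self.backbone = backbone                          # e.g., CNN over skin images
        self.classifier = nn.Linear(feat_dim, n_classes)  # lesion prediction
        self.bias_head = nn.Linear(feat_dim, 2)           # protected attribute
        self.critic = nn.Linear(feat_dim, 1)              # predicted unfairness

    def forward(self, x):
        z = self.backbone(x)
        return self.classifier(z), self.bias_head(z), self.critic(z)
```

Training would pit the backbone against the bias head (as in the adversarial sketches earlier on this page) while supervising the critic on where the classifier misbehaves.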
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
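For the post-hoc ranking adjustment, a minimal sketch under assumed details: search a constant additive score offset for one protected group that trades overall AUC against cross-group score disparity. The paper's framework is more general; this only illustrates the post-processing flavor.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic; ignores ties."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def post_hoc_offset(scores, groups, labels, lam=1.0):
    """Pick a constant additive offset for group 1 that trades ranking
    utility (AUC) against cross-group score disparity -- an assumed,
    one-parameter stand-in for the paper's post-processing framework."""
    best_d, best_obj = 0.0, -np.inf
    for d in np.linspace(-1.0, 1.0, 41):
        adj = scores + d * (groups == 1)
        disparity = abs(adj[groups == 0].mean() - adj[groups == 1].mean())
        obj = auc(adj, labels) - lam * disparity
        if obj > best_obj:
            best_d, best_obj = d, obj
    return best_d
```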
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that, without adjusting for the sampling bias induced by observing labels for only one side of the decision, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
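The last entry concerns labels observed for only one side of the decision (e.g., outcomes seen only for selected cases). A standard baseline correction, inverse-propensity-weighted logistic regression, illustrates the setting; the propensity model and synthetic data below are assumptions, and the paper's adaptive method goes beyond this.

```python
import numpy as np

def ipw_logistic_grad(w, X, y, propensity):
    """Gradient of the inverse-propensity-weighted logistic loss: each labeled
    example is reweighted by 1 / P(selected | x) to correct one-sided sampling.
    A standard baseline; the paper's adaptive method is more refined."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ ((p - y) / propensity) / len(y)

# Toy run on synthetic selectively-labeled data (assumed setup).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
prop = 1.0 / (1.0 + np.exp(-X[:, 0]))   # selection probability depends on x_0
keep = rng.random(200) < prop           # only 'one side of the story' is labeled
w = np.zeros(3)
for _ in range(500):
    w -= 0.5 * ipw_logistic_grad(w, X[keep], y[keep], prop[keep])
print(w)  # roughly recovers the direction of the true coefficients
```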