Fair Patient Model: Mitigating Bias in the Patient Representation
Learned from the Electronic Health Records
- URL: http://arxiv.org/abs/2306.03179v1
- Date: Mon, 5 Jun 2023 18:40:35 GMT
- Title: Fair Patient Model: Mitigating Bias in the Patient Representation
Learned from the Electronic Health Records
- Authors: Sonish Sivarajkumar, Yufei Huang, Yanshan Wang
- Abstract summary: We applied the proposed model, called Fair Patient Model (FPM), to a sample of 34,739 patients from the MIMIC-III dataset.
FPM outperformed the baseline models in terms of three fairness metrics: demographic parity, equality of opportunity difference, and equalized odds ratio.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: To pre-train fair and unbiased patient representations from
Electronic Health Records (EHRs) using a novel weighted loss function that
reduces bias and improves fairness in deep representation learning models.
Methods: We defined a new loss function, called weighted loss function, in
the deep representation learning model to balance the importance of different
groups of patients and features. We applied the proposed model, called Fair
Patient Model (FPM), to a sample of 34,739 patients from the MIMIC-III dataset
and learned patient representations for four clinical outcome prediction tasks.
Results: FPM outperformed the baseline models in terms of three fairness
metrics: demographic parity, equality of opportunity difference, and equalized
odds ratio. FPM also achieved comparable predictive performance with the
baselines, with an average accuracy of 0.7912. Feature analysis revealed that
FPM captured more information from clinical features than the baselines.
Conclusion: FPM is a novel method to pre-train fair and unbiased patient
representations from EHR data using a weighted loss function. The learned
representations can be used for various downstream tasks in healthcare and can
be extended to other domains where bias and fairness are important.
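The abstract does not give the exact form of the weighted loss or of the fairness metrics, so the following sketch is illustrative only: it uses inverse group-frequency sample weights inside a binary cross-entropy loss, and computes demographic parity and equal-opportunity gaps. The function names and the weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def group_weights(groups):
    """Inverse-frequency sample weights, so each demographic group
    contributes equally to the loss (an illustrative scheme; the
    paper's exact weighting is not specified in the abstract)."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / (len(values) * freq[g]) for g in groups])

def weighted_bce(y_true, y_prob, weights, eps=1e-7):
    """Binary cross-entropy averaged with per-sample weights."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    losses = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float(np.sum(weights * losses) / np.sum(weights))

def demographic_parity_diff(y_pred, groups):
    """Largest gap in positive-prediction rate between groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def equal_opportunity_diff(y_true, y_pred, groups):
    """Largest gap in true-positive rate between groups."""
    rates = [y_pred[(groups == g) & (y_true == 1)].mean()
             for g in np.unique(groups)]
    return float(max(rates) - min(rates))
```

With weights from `group_weights`, a minority group's samples count as much toward the loss as a majority group's, which is one simple way a weighted loss can "balance the importance of different groups of patients" as the abstract describes.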
Related papers
- Debias-CLR: A Contrastive Learning Based Debiasing Method for Algorithmic Fairness in Healthcare Applications
We proposed an implicit in-processing debiasing method to combat disparate treatment.
We used clinical notes of heart failure patients, along with their diagnostic codes, procedure reports, and physiological vitals.
We found that Debias-CLR was able to reduce the Single-Category Word Embedding Association Test (SC-WEAT) effect size score when debiasing for gender and ethnicity.
arXiv Detail & Related papers (2024-11-15T19:32:01Z)
- Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z)
- FairEHR-CLP: Towards Fairness-Aware Clinical Predictions with Contrastive Learning in Multimodal Electronic Health Records
We present FairEHR-CLP: a framework for fairness-aware Clinical Predictions with Contrastive Learning in EHRs.
FairEHR-CLP operates through a two-stage process, utilizing patient demographics, longitudinal data, and clinical notes.
We introduce a novel fairness metric to effectively measure error rate disparities across subgroups.
arXiv Detail & Related papers (2024-02-01T19:24:45Z)
- Debiasing Multimodal Models via Causal Information Minimization
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and apply causally motivated methods to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- MCRAGE: Synthetic Healthcare Data for Fairness
We propose Minority Class Rebalancing through Augmentation by Generative modeling (MCRAGE) to augment imbalanced datasets.
MCRAGE involves training a Conditional Denoising Diffusion Probabilistic Model (CDDPM) capable of generating high-quality synthetic EHR samples from underrepresented classes.
We use this synthetic data to augment the existing imbalanced dataset, resulting in a more balanced distribution across all classes.
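The rebalancing step described above can be sketched as topping up each minority class to the majority-class count with generated samples. In the sketch below, `generate(cls, n)` is a hypothetical stand-in for the trained CDDPM sampler, not MCRAGE's actual API; any conditional generator would fit this interface.

```python
import numpy as np

def rebalance_with_synthetic(X, y, generate):
    """Augment each minority class with synthetic samples until all
    classes match the majority-class count. `generate(cls, n)` is a
    placeholder for a conditional generative model's sampler."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, cnt in zip(classes, counts):
        deficit = target - cnt
        if deficit > 0:
            X_parts.append(generate(cls, deficit))
            y_parts.append(np.full(deficit, cls))
    return np.concatenate(X_parts), np.concatenate(y_parts)
```

The result is the "more balanced distribution across all classes" the summary mentions, with real samples untouched and only the deficit filled synthetically.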
arXiv Detail & Related papers (2023-10-27T19:02:22Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis
Deep learning (DL) models have shown great success in many medical image analysis tasks.
However, deployment of the resulting models into real clinical contexts requires robustness and fairness across different sub-populations.
Recent studies have shown significant biases in DL models across demographic subgroups, indicating a lack of fairness in the models.
arXiv Detail & Related papers (2023-03-06T16:01:30Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using sensitive attributes from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z)
- Hemogram Data as a Tool for Decision-making in COVID-19 Management: Applications to Resource Scarcity Scenarios
The COVID-19 pandemic has challenged emergency response systems worldwide, with widespread reports of essential-service breakdowns and the collapse of health care structures.
This work describes a machine learning model derived from hemogram exam data performed in symptomatic patients.
Proposed models can predict COVID-19 qRT-PCR results in symptomatic individuals with high accuracy, sensitivity and specificity.
arXiv Detail & Related papers (2020-05-10T01:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.