Mitigating Health Disparities in EHR via Deconfounder
- URL: http://arxiv.org/abs/2210.15901v1
- Date: Fri, 28 Oct 2022 05:16:50 GMT
- Title: Mitigating Health Disparities in EHR via Deconfounder
- Authors: Zheng Liu, Xiaohan Li and Philip Yu
- Abstract summary: We propose a novel framework, Parity Medical Deconfounder (PriMeD), to deal with the disparity issue in healthcare datasets.
PriMeD adopts a Conditional Variational Autoencoder (CVAE) to learn latent factors (substitute confounders) for observational data.
- Score: 5.511343163506091
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Health disparities, or inequalities between different patient demographics,
are becoming crucial in medical decision-making, especially in Electronic
Health Record (EHR) predictive modeling. To ensure fairness with respect to sensitive
attributes, conventional studies mainly adopt calibration or re-weighting
methods to balance performance among different demographic groups.
However, we argue that these methods have some limitations. First, these
methods usually entail a trade-off between the model's performance and its fairness.
Second, many methods attribute unfairness entirely to the data collection
process, a claim that lacks substantial evidence. In this paper, we provide an
empirical study to explore the possibility of using a deconfounder to address
the disparity issue in healthcare. Our study can be summarized in two parts.
The first part is a pilot study demonstrating the exacerbation of disparity
when unobserved confounders exist. The second part proposes a novel framework,
Parity Medical Deconfounder (PriMeD), to deal with the disparity issue in
healthcare datasets. Inspired by the deconfounder theory, PriMeD adopts a
Conditional Variational Autoencoder (CVAE) to learn latent factors (substitute
confounders) for observational data, and extensive experiments are provided to
show its effectiveness.
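As a rough, hedged illustration of the PriMeD idea (not the authors' code), the sketch below learns a substitute confounder with a Conditional Variational Autoencoder: the encoder infers a latent factor z from a patient's treatment indicators together with observed covariates, and the decoder reconstructs the treatments from z and the covariates. The network sizes, the Bernoulli treatment likelihood, and the choice of conditioning variables are assumptions made for this example.

```python
# Minimal sketch of CVAE-based substitute-confounder learning (PyTorch).
# Assumptions: binary treatment indicators, Gaussian latent z, conditioning
# on observed covariates; this is not the PriMeD reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubstituteConfounderCVAE(nn.Module):
    def __init__(self, n_treatments, n_covariates, latent_dim=16, hidden=64):
        super().__init__()
        # Encoder q(z | treatments, covariates)
        self.encoder = nn.Sequential(
            nn.Linear(n_treatments + n_covariates, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder p(treatments | z, covariates)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_covariates, hidden), nn.ReLU(),
            nn.Linear(hidden, n_treatments))

    def forward(self, treatments, covariates):
        h = self.encoder(torch.cat([treatments, covariates], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        logits = self.decoder(torch.cat([z, covariates], dim=-1))
        return logits, mu, logvar

def elbo_loss(logits, treatments, mu, logvar):
    # Bernoulli reconstruction term plus KL divergence to the N(0, I) prior
    recon = F.binary_cross_entropy_with_logits(logits, treatments, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

After training, the posterior mean of z can be appended to the outcome model's inputs so that predictions are adjusted for the inferred (substitute) confounder, which is how deconfounder-style methods typically use the learned factors.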
Related papers
- FairEHR-CLP: Towards Fairness-Aware Clinical Predictions with Contrastive Learning in Multimodal Electronic Health Records [15.407593899656762]
We present FairEHR-CLP: a framework for fairness-aware Clinical Predictions with Contrastive Learning in EHRs.
FairEHR-CLP operates through a two-stage process, utilizing patient demographics, longitudinal data, and clinical notes.
We introduce a novel fairness metric to effectively measure error rate disparities across subgroups (a generic sketch of such a subgroup error-rate comparison is given after this list).
arXiv Detail & Related papers (2024-02-01T19:24:45Z) - Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - (Predictable) Performance Bias in Unsupervised Anomaly Detection [3.826262429926079]
Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection.
Our study quantified the disparate performance of UAD models against certain demographic subgroups.
arXiv Detail & Related papers (2023-09-25T14:57:43Z) - A Counterfactual Fair Model for Longitudinal Electronic Health Records via Deconfounder [5.198621505969445]
We propose a novel model called Fair Longitudinal Medical Deconfounder (FLMD).
FLMD aims to achieve both fairness and accuracy in longitudinal Electronic Health Records (EHR) modeling.
We conducted comprehensive experiments on two real-world EHR datasets to demonstrate the effectiveness of FLMD.
arXiv Detail & Related papers (2023-08-22T22:43:20Z) - Unbiased Pain Assessment through Wearables and EHR Data: Multi-attribute Fairness Loss-based CNN Approach [3.799109312082668]
We propose a Multi-attribute Fairness Loss (MAFL) based CNN model to account for any sensitive attributes included in the data.
We compare the proposed model with well-known existing mitigation procedures, and experiments reveal that the implemented model performs favorably compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-07-03T09:21:36Z) - MEDFAIR: Benchmarking Fairness for Medical Imaging [44.73351338165214]
MEDFAIR is a framework to benchmark the fairness of machine learning models for medical imaging.
We find that the under-studied issue of model selection criterion can have a significant impact on fairness outcomes.
We make recommendations for different medical application scenarios that require different ethical principles.
arXiv Detail & Related papers (2022-10-04T16:30:47Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the added differential-privacy noise makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z) - Predictive Modeling of ICU Healthcare-Associated Infections from Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling Approach [55.41644538483948]
This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units.
The aim is to support decision making addressed at reducing the incidence rate of infections.
arXiv Detail & Related papers (2020-05-07T16:13:12Z)
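Referenced from the FairEHR-CLP entry above: a generic sketch of comparing error rates across demographic subgroups. It computes a simple max-min gap in per-group error rates and is an illustrative assumption, not the fairness metric introduced in that paper.

```python
# Generic illustration of subgroup error-rate disparity (max-min gap).
# This is an assumed, simplified metric, not the one proposed in FairEHR-CLP.
from collections import defaultdict

def subgroup_error_rates(y_true, y_pred, groups):
    """Per-group error rate for binary labels and predictions."""
    counts, errors = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / counts[g] for g in counts}

def error_rate_disparity(y_true, y_pred, groups):
    """Gap between the worst and best subgroup error rates; 0 means parity."""
    rates = subgroup_error_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy cohort with two demographic groups A and B
print(error_rate_disparity(
    y_true=[1, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 0],
    groups=["A", "A", "A", "B", "B", "B"]))  # group A: 1/3, group B: 2/3 -> gap 0.333...
```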