Evaluating subgroup disparity using epistemic uncertainty in mammography
- URL: http://arxiv.org/abs/2107.02716v1
- Date: Tue, 6 Jul 2021 16:36:48 GMT
- Title: Evaluating subgroup disparity using epistemic uncertainty in mammography
- Authors: Charles Lu, Andreanne Lemay, Katharina Hoebel, Jayashree
Kalpathy-Cramer
- Abstract summary: We explore how uncertainty can be used to evaluate disparity in patient demographics (race) and data acquisition subgroups for breast density assessment.
Our results show that even if aggregate performance is comparable, the choice of uncertainty quantification metric can significantly affect results at the subgroup level.
We hope this analysis can promote further work on how uncertainty can be leveraged to increase transparency of machine learning applications for clinical deployment.
- Score: 3.045076250501715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning (ML) continues to be integrated into healthcare systems
that affect clinical decision making, new strategies will need to be
incorporated in order to effectively detect and evaluate subgroup disparities
to ensure accountability and generalizability in clinical workflows. In this
paper, we explore how epistemic uncertainty can be used to evaluate disparity
in patient demographics (race) and data acquisition (scanner) subgroups for
breast density assessment on a dataset of 108,190 mammograms collected from 33
clinical sites. Our results show that even if aggregate performance is
comparable, the choice of uncertainty quantification metric can significantly
affect results at the subgroup level. We hope this analysis can promote further work on how
uncertainty can be leveraged to increase transparency of machine learning
applications for clinical deployment.
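As a hedged illustration of the abstract's approach, epistemic uncertainty is commonly estimated from multiple stochastic forward passes (e.g. MC dropout) and then compared across subgroups. The sketch below uses synthetic probabilities; the array shapes, subgroup labels, and mutual-information decomposition are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: per-subgroup epistemic uncertainty from
# T stochastic forward passes. Synthetic data stands in for a real model.
import numpy as np

def epistemic_uncertainty(mc_probs):
    """Mutual information between prediction and model weights.

    mc_probs: array of shape (T, N, C) -- T MC-dropout passes,
    N examples, C classes (e.g. four breast density categories).
    """
    mean_p = mc_probs.mean(axis=0)                                   # (N, C)
    total = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)         # predictive entropy
    expected = -np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=2).mean(axis=0)
    return total - expected                                          # epistemic component

def subgroup_summary(uncertainty, groups):
    """Mean epistemic uncertainty per subgroup (e.g. race or scanner)."""
    return {g: float(uncertainty[groups == g].mean()) for g in np.unique(groups)}

# Synthetic stand-in: 20 MC passes over 100 mammograms, 4 classes.
rng = np.random.default_rng(0)
mc_probs = rng.dirichlet(np.ones(4), size=(20, 100))                 # (20, 100, 4)
groups = rng.choice(["scanner_A", "scanner_B"], size=100)
u = epistemic_uncertainty(mc_probs)
print(subgroup_summary(u, groups))
```

Comparing these per-subgroup means (rather than only aggregate accuracy) is what surfaces the disparities the abstract describes.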
Related papers
- Federated unsupervised random forest for privacy-preserving patient
stratification [0.4499833362998487]
We introduce a novel multi-omics clustering approach utilizing unsupervised random-forests.
We have validated our approach on machine learning benchmark data sets and on cancer data from The Cancer Genome Atlas.
Our method is competitive with the state-of-the-art in terms of disease subtyping, but at the same time substantially improves the cluster interpretability.
arXiv Detail & Related papers (2024-01-29T12:04:14Z) - Evaluating the Fairness of the MIMIC-IV Dataset and a Baseline
Algorithm: Application to the ICU Length of Stay Prediction [65.268245109828]
This paper uses the MIMIC-IV dataset to examine the fairness and bias in an XGBoost binary classification model predicting the ICU length of stay.
The research reveals class imbalances in the dataset across demographic attributes and employs data preprocessing and feature extraction.
The paper concludes with recommendations for fairness-aware machine learning techniques for mitigating biases and the need for collaborative efforts among healthcare professionals and data scientists.
arXiv Detail & Related papers (2023-12-31T16:01:48Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - Auditing ICU Readmission Rates in a Clinical Database: An Analysis of
Risk Factors and Clinical Outcomes [0.0]
This study presents a machine learning pipeline for clinical data classification in the context of a 30-day readmission problem.
The fairness audit uncovers disparities in equal opportunity, predictive parity, false positive rate parity, and false negative rate parity criteria.
The study suggests the need for collaborative efforts among researchers, policymakers, and practitioners to address bias and fairness in artificial intelligence (AI) systems.
arXiv Detail & Related papers (2023-04-12T17:09:38Z) - Clinical trial site matching with improved diversity using fair policy
learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z) - Distribution-Free Federated Learning with Conformal Predictions [0.0]
Federated learning aims to leverage separate institutional datasets while maintaining patient privacy.
Poor calibration and lack of interpretability may hamper widespread deployment of federated models into clinical practice.
We propose to address these challenges by incorporating an adaptive conformal framework into federated learning.
arXiv Detail & Related papers (2021-10-14T18:41:17Z) - Fair Conformal Predictors for Applications in Medical Imaging [4.236384785644418]
Conformal methods can complement deep learning models by providing a clinically intuitive way of expressing model uncertainty.
We conduct experiments with mammographic breast density and dermatology photography datasets to demonstrate the utility of conformal predictions.
We find that conformal predictors can be used to equalize coverage with respect to patient demographics such as race and skin tone.
arXiv Detail & Related papers (2021-09-09T16:31:10Z) - Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z) - An Empirical Characterization of Fair Machine Learning For Clinical Risk
Prediction [7.945729033499554]
The use of machine learning to guide clinical decision making has the potential to worsen existing health disparities.
Several recent works frame the problem as that of algorithmic fairness, a framework that has attracted considerable attention and criticism.
We conduct an empirical study to characterize the impact of penalizing group fairness violations on an array of measures of model performance and group fairness.
arXiv Detail & Related papers (2020-07-20T17:46:31Z) - Predictive Modeling of ICU Healthcare-Associated Infections from
Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling
Approach [55.41644538483948]
This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units.
The aim is to support decision making addressed at reducing the incidence rate of infections.
arXiv Detail & Related papers (2020-05-07T16:13:12Z) - Contextual Constrained Learning for Dose-Finding Clinical Trials [102.8283665750281]
C3T-Budget is a contextual constrained clinical trial algorithm for dose-finding under both budget and safety constraints.
It recruits patients with consideration of the remaining budget, the remaining time, and the characteristics of each group.
arXiv Detail & Related papers (2020-01-08T11:46:48Z)
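Several entries above (e.g. "Fair Conformal Predictors for Applications in Medical Imaging" and "Distribution-Free Federated Learning with Conformal Predictions") rely on split conformal prediction with group-wise calibration to equalize coverage across demographics. The sketch below is a minimal illustration on synthetic scores; the function names, score conventions, and group labels are assumptions, not any paper's actual implementation.

```python
# Illustrative sketch only: group-wise split conformal prediction.
# Calibrating a separate nonconformity threshold per group is one way
# to equalize coverage across demographic subgroups.
import numpy as np

def groupwise_thresholds(cal_scores, cal_groups, alpha=0.1):
    """Per-group (1 - alpha)-quantile of calibration nonconformity scores."""
    thresholds = {}
    for g in np.unique(cal_groups):
        s = np.sort(cal_scores[cal_groups == g])
        n = len(s)
        # Finite-sample conformal quantile index, clipped to valid range.
        k = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)
        thresholds[g] = s[k]
    return thresholds

def prediction_sets(test_scores, test_groups, thresholds):
    """Include each class whose nonconformity score is within its
    group's threshold. test_scores: (N, C) scores per class."""
    thr = np.array([thresholds[g] for g in test_groups])
    return test_scores <= thr[:, None]                    # boolean (N, C)

# Synthetic calibration and test data.
rng = np.random.default_rng(1)
cal_scores = rng.uniform(size=500)
cal_groups = rng.choice(["group_0", "group_1"], size=500)
thr = groupwise_thresholds(cal_scores, cal_groups, alpha=0.1)
sets = prediction_sets(rng.uniform(size=(10, 4)),
                       rng.choice(["group_0", "group_1"], size=10), thr)
```

Because each group is calibrated on its own scores, the (1 - alpha) coverage guarantee holds per group rather than only on average, which is the equalized-coverage property the conformal papers above discuss.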
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.