Fair Conformal Predictors for Applications in Medical Imaging
- URL: http://arxiv.org/abs/2109.04392v1
- Date: Thu, 9 Sep 2021 16:31:10 GMT
- Title: Fair Conformal Predictors for Applications in Medical Imaging
- Authors: Charles Lu, Andreanne Lemay, Ken Chang, Katharina Hoebel, Jayashree
Kalpathy-Cramer
- Abstract summary: Conformal methods can complement deep learning models by providing a clinically intuitive way of expressing model uncertainty.
We conduct experiments with mammographic breast density and dermatology photography datasets to demonstrate the utility of conformal predictions.
We find that conformal predictors can be used to equalize coverage with respect to patient demographics such as race and skin tone.
- Score: 4.236384785644418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has the potential to augment many components of the clinical
workflow, such as medical image interpretation. However, the translation of
these black-box algorithms into clinical practice has been marred by a
relative lack of transparency compared to conventional machine learning
methods, hindering clinician trust in these systems for critical medical
decision-making. Specifically, common deep learning approaches do not have
intuitive ways of expressing uncertainty for cases that might require further
human review. Furthermore, the possibility of algorithmic bias has caused
hesitancy regarding the use of developed algorithms in clinical settings. To
these ends, we explore how conformal methods can complement deep learning
models by providing both a clinically intuitive way of expressing model
uncertainty (by means of confidence prediction sets) and greater model
transparency in clinical workflows. In this paper, we conduct a field survey
with clinicians to assess clinical use cases of conformal predictions. Next,
we conduct experiments with mammographic breast density and dermatology
photography datasets to demonstrate the utility of conformal predictions in
"rule-in" and "rule-out" disease scenarios. Further, we show that conformal
predictors can be used to equalize coverage with respect to patient
demographics such as race and skin tone. We find conformal prediction to be a
promising framework with the potential to increase clinical usability and
transparency for better collaboration between deep learning algorithms and
clinicians.
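The abstract's key mechanism, confidence prediction sets with coverage equalized across demographic groups, can be sketched with split conformal prediction calibrated per group. The sketch below is illustrative only and is not the authors' implementation; the function names, the choice of nonconformity score (one minus the softmax probability of the true class), and the use of NumPy are all assumptions.

```python
import numpy as np

def groupwise_conformal_thresholds(cal_probs, cal_labels, groups, alpha=0.1):
    """Compute a per-group conformal threshold so that each demographic
    group attains roughly (1 - alpha) coverage on its own calibration data.

    cal_probs:  (n, k) softmax scores on a held-out calibration set
    cal_labels: (n,)   true class indices
    groups:     (n,)   demographic group id for each calibration example
    """
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        n = len(s)
        # Finite-sample-corrected quantile level used in split conformal.
        q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        thresholds[g] = np.quantile(s, q)
    return thresholds

def prediction_set(probs, group, thresholds):
    """Return all classes whose nonconformity score is within the
    threshold calibrated for this example's demographic group."""
    return np.where(1.0 - probs <= thresholds[group])[0]
```

A small or empty prediction set can then support a "rule-out" decision, while a set containing the disease class supports "rule-in" review, with coverage holding approximately per group rather than only on average.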
Related papers
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z) - Evaluation of Popular XAI Applied to Clinical Prediction Models: Can
They be Trusted? [2.0089256058364358]
The absence of transparency and explainability hinders the clinical adoption of Machine learning (ML) algorithms.
This study evaluates two popular XAI methods used for explaining predictive models in the healthcare context.
arXiv Detail & Related papers (2023-06-21T02:29:30Z) - Informing clinical assessment by contextualizing post-hoc explanations
of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z) - Patient Aware Active Learning for Fine-Grained OCT Classification [12.89552245538411]
We propose a framework that incorporates clinical insights into the sample selection process of active learning.
Our medically interpretable active learning framework captures diverse disease manifestations from patients to improve performance of OCT classification.
arXiv Detail & Related papers (2022-06-23T05:47:51Z) - Benchmarking Heterogeneous Treatment Effect Models through the Lens of
Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z) - The Medkit-Learn(ing) Environment: Medical Decision Modelling through
Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z) - Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z) - (Un)fairness in Post-operative Complication Prediction Models [20.16366948502659]
We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
arXiv Detail & Related papers (2020-11-03T22:11:19Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z) - An Empirical Characterization of Fair Machine Learning For Clinical Risk
Prediction [7.945729033499554]
The use of machine learning to guide clinical decision making has the potential to worsen existing health disparities.
Several recent works frame the problem as that of algorithmic fairness, a framework that has attracted considerable attention and criticism.
We conduct an empirical study to characterize the impact of penalizing group fairness violations on an array of measures of model performance and group fairness.
arXiv Detail & Related papers (2020-07-20T17:46:31Z) - Uncertainty estimation for classification and risk prediction on medical
tabular data [0.0]
This work advances the understanding of uncertainty estimation for classification and risk prediction on medical data.
In a data-scarce field such as healthcare, the ability to measure the uncertainty of a model's prediction could potentially lead to improved effectiveness of decision support tools.
arXiv Detail & Related papers (2020-04-13T08:46:41Z)