Explainable Deep Learning in Healthcare: A Methodological Survey from an
Attribution View
- URL: http://arxiv.org/abs/2112.02625v1
- Date: Sun, 5 Dec 2021 17:12:53 GMT
- Title: Explainable Deep Learning in Healthcare: A Methodological Survey from an
Attribution View
- Authors: Di Jin and Elena Sergeeva and Wei-Hung Weng and Geeticka Chauhan and
Peter Szolovits
- Abstract summary: We introduce interpretability methods in depth and comprehensively, as a methodological reference for future researchers and clinical practitioners.
We discuss how these methods have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies.
- Score: 36.025217954247125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing availability of large collections of electronic health record
(EHR) data and unprecedented technical advances in deep learning (DL) have
sparked a surge of research interest in developing DL based clinical decision
support systems for diagnosis, prognosis, and treatment. Despite the
recognition of the value of deep learning in healthcare, impediments to further
adoption in real healthcare settings remain due to the black-box nature of DL.
Therefore, there is an emerging need for interpretable DL, which allows end
users to evaluate the model decision making to know whether to accept or reject
predictions and recommendations before an action is taken. In this review, we
focus on the interpretability of the DL models in healthcare. We start by
introducing the methods for interpretability in depth and comprehensively as a
methodological reference for future researchers or clinical practitioners in
this field. Besides the methods' details, we also discuss the advantages and disadvantages of each method and the scenarios for which it is suitable, so that interested readers know how to compare the methods and choose among them. Moreover, we discuss how these methods, originally
developed for solving general-domain problems, have been adapted and applied to
healthcare problems and how they can help physicians better understand these
data-driven technologies. Overall, we hope this survey can help researchers and
practitioners in both artificial intelligence (AI) and clinical fields
understand what methods we have for enhancing the interpretability of their DL
models and choose the optimal one accordingly.
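As a toy illustration of the attribution view the survey takes (not code from the paper), gradient-times-input saliency scores each input feature by the product of the feature value and the model's gradient with respect to it. The logistic model, weights, and patient features below are hypothetical, chosen only to make the mechanics concrete:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(weights, x):
    """Gradient-times-input attribution for a logistic model.

    For p = sigmoid(w . x), the partial derivative dp/dx_i is
    p * (1 - p) * w_i, so feature i's attribution score is
    x_i * p * (1 - p) * w_i.
    """
    p = sigmoid(weights @ x)
    return x * p * (1.0 - p) * weights

# Hypothetical risk-factor weights and patient features, for illustration only.
weights = np.array([2.0, -1.0, 0.0])
features = np.array([1.5, 0.5, 3.0])
scores = gradient_x_input(weights, features)
# A feature the model ignores (zero weight) receives zero attribution;
# the signs of the other scores show whether each feature pushed the
# predicted risk up or down.
```

In a healthcare setting, such scores would be shown to a clinician alongside the prediction so they can judge whether the model relied on clinically plausible features before accepting or rejecting its recommendation.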
Related papers
- Navigating Distribution Shifts in Medical Image Analysis: A Survey [23.012651270865707]
This paper systematically reviews approaches that apply deep learning techniques to MedIA systems affected by distribution shifts.
We categorize the existing body of work into Joint Training, Federated Learning, Fine-tuning, and Domain Generalization.
By delving deeper into these topics, we highlight potential pathways for future research.
arXiv Detail & Related papers (2024-11-05T08:01:16Z)
- Exploring LLM-based Data Annotation Strategies for Medical Dialogue Preference Alignment [22.983780823136925]
This research examines the use of Reinforcement Learning from AI Feedback (RLAIF) techniques to improve healthcare dialogue models.
We argue that the primary challenge in current RLAIF research for healthcare lies in the limitations of automated evaluation methods.
We present a new evaluation framework based on standardized patient examinations.
arXiv Detail & Related papers (2024-10-05T10:29:19Z)
- A Survey of Models for Cognitive Diagnosis: New Developments and Future Directions [66.40362209055023]
This paper aims to provide a survey of current models for cognitive diagnosis, with more attention on new developments using machine learning-based methods.
By comparing the model structures, parameter estimation algorithms, model evaluation methods and applications, we provide a relatively comprehensive review of the recent trends in cognitive diagnosis models.
arXiv Detail & Related papers (2024-07-07T18:02:00Z)
- ProtoAL: Interpretable Deep Active Learning with prototypes for medical imaging [0.6292138336765966]
We propose the ProtoAL method, where we integrate an interpretable DL model into the Deep Active Learning framework.
We evaluated ProtoAL on the Messidor dataset, achieving an area under the precision-recall curve of 0.79 while utilizing only 76.54% of the available labeled data.
arXiv Detail & Related papers (2024-04-06T21:39:49Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
This paper covers the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research [1.6574413179773761]
Our study uses counterfactual explanations to explore the applicability of "what if?" scenarios in medical research.
Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors.
arXiv Detail & Related papers (2023-07-05T09:14:09Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Semi-Supervised Variational Reasoning for Medical Dialogue Generation [70.838542865384]
Two key characteristics are relevant for medical dialogue generation: patient states and physician actions.
We propose an end-to-end variational reasoning approach to medical dialogue generation.
A physician policy network, composed of an action classifier and two reasoning detectors, is proposed to augment reasoning ability.
arXiv Detail & Related papers (2021-05-13T04:14:35Z)
- Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems [4.9449660544238085]
We discuss early achievements in development of explainable AI for validation of known disease criteria.
We highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision support tool.
arXiv Detail & Related papers (2020-11-26T08:08:19Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.