Explainable Artificial Intelligence for Human Decision-Support System in
Medical Domain
- URL: http://arxiv.org/abs/2105.02357v1
- Date: Wed, 5 May 2021 22:29:28 GMT
- Title: Explainable Artificial Intelligence for Human Decision-Support System in
Medical Domain
- Authors: Samanta Knapič, Avleen Malhi, Rohit Salujaa, Kary Främling
- Abstract summary: Our aim was to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN).
The visual explanations were provided on in-vivo gastral images obtained from video capsule endoscopy (VCE).
We have found that, as hypothesized, the CIU explainable method performed better than both LIME and SHAP methods in terms of increasing support for human decision-making.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we present the potential of Explainable Artificial
Intelligence methods for decision support in medical image analysis scenarios.
By applying three types of explainable methods to the same medical image data
set, our aim was to improve the comprehensibility of the decisions provided by
the Convolutional Neural Network (CNN). The visual explanations were provided
on in-vivo gastral images obtained from video capsule endoscopy (VCE), with
the goal of increasing the health professionals' trust in the black-box
predictions. We implemented two post-hoc interpretable machine learning
methods, LIME and SHAP, and the alternative explanation approach Contextual
Importance and Utility (CIU). The produced explanations were assessed through
human evaluation. We conducted three user studies based on the explanations
provided by LIME, SHAP and CIU. Users from different non-medical backgrounds
carried out a series of tests in a web-based survey setting and reported their
experience and understanding of the given explanations. Three user groups
(n=20, 20, 20), each receiving a distinct form of explanation, were
quantitatively analyzed. We found that, as hypothesized, the CIU method
performed better than both LIME and SHAP in terms of increasing support for
human decision-making, as well as being more transparent and thus more
understandable to users. Additionally, CIU generated explanations more rapidly
than LIME and SHAP. Our findings suggest that there are notable differences in
human decision-making between the different explanation support settings.
Accordingly, we present three explainable methods that, with future
improvements in implementation, can be generalized to different medical data
sets and can provide valuable decision support for medical experts.
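The explanation pipeline described in the abstract can be illustrated with a short, hedged sketch. The snippet below shows how LIME and SHAP visual explanations might be produced for a CNN image classifier, together with a much-simplified CIU-style importance/utility computation. The tiny Keras model, the random stand-in images, the region mask, and the `ciu_for_region` helper are all hypothetical placeholders for illustration; they are not the authors' implementation and not the py-ciu library API.

```python
# Hedged sketch: visual explanations for a CNN image classifier, loosely
# mirroring the paper's LIME / SHAP / CIU setup. Model and data are placeholders.
import numpy as np
import tensorflow as tf
import shap
from lime import lime_image

# Placeholder CNN for 128x128 RGB frames with two classes (e.g. normal vs. anomaly);
# a stand-in for the trained VCE classifier, which is not reproduced here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Random stand-in images; real use would load in-vivo VCE frames instead.
images = np.random.rand(8, 128, 128, 3).astype("float32")

# --- LIME: perturb superpixels and fit a local surrogate around one image ---
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    images[0].astype("double"),
    classifier_fn=lambda batch: model.predict(np.asarray(batch, dtype="float32"), verbose=0),
    top_labels=1,
    num_samples=200,
)
lime_img, lime_mask = lime_exp.get_image_and_mask(
    lime_exp.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)

# --- SHAP: gradient-based Shapley value estimates against a background set ---
shap_explainer = shap.GradientExplainer(model, images[:4])
shap_values = shap_explainer.shap_values(images[4:6])

# --- CIU-style scores for one image region (simplified, hypothetical helper) ---
def ciu_for_region(model, image, region_mask, n_samples=50, target=0):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU) for the
    pixels selected by region_mask, by sampling random values for that region
    and observing the spread of the model output for the target class."""
    outputs = []
    for _ in range(n_samples):
        perturbed = image.copy()
        perturbed[region_mask] = np.random.rand(int(region_mask.sum()), 3)
        outputs.append(float(model.predict(perturbed[None], verbose=0)[0, target]))
    c_min, c_max = min(outputs), max(outputs)
    y = float(model.predict(image[None], verbose=0)[0, target])
    ci = (c_max - c_min) / 1.0                  # softmax output range is [0, 1]
    cu = (y - c_min) / (c_max - c_min + 1e-9)
    return ci, cu

region = np.zeros((128, 128), dtype=bool)
region[32:64, 32:64] = True                     # hypothetical region of interest
ci, cu = ciu_for_region(model, images[0], region)
```

In this simplified reading, LIME and SHAP attribute the prediction to image regions post hoc, while the CIU-style scores separate how much a region can change the output (importance) from how favorable its current value is for the predicted class (utility), which is the distinction the paper's user studies compare.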
Related papers
- GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning [50.94508930739623]
Medical visual question answering aims to support clinical decision-making by enabling models to answer natural language questions based on medical images.
Current methods still suffer from limited answer reliability and poor interpretability, impairing the ability of clinicians and patients to understand and trust model-generated answers.
This work first proposes a Thinking with Visual Grounding dataset wherein the answer generation is decomposed into intermediate reasoning steps.
We introduce a novel verifiable reward mechanism for reinforcement learning to guide post-training, improving the alignment between the model's reasoning process and its final answer.
arXiv Detail & Related papers (2025-06-22T08:09:58Z) - A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support [2.020765276735129]
The study aims to identify the most effective and useful explanations that enhance the diagnostic process.
Medical doctors filled out a survey to assess different types of explanations.
arXiv Detail & Related papers (2025-05-15T11:42:24Z) - Uncertainty-aware abstention in medical diagnosis based on medical texts [87.88110503208016]
This study addresses the critical issue of reliability for AI-assisted medical diagnosis.
We focus on the selective prediction approach, which allows the diagnosis system to abstain from providing a decision if it is not confident in the diagnosis.
We introduce HUQ-2, a new state-of-the-art method for enhancing reliability in selective prediction tasks.
arXiv Detail & Related papers (2025-02-25T10:15:21Z) - Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that the quality of explanations, that is, how much factually correct information they entail, and how much this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Beyond Known Reality: Exploiting Counterfactual Explanations for Medical
Research [1.6574413179773761]
Our study uses counterfactual explanations to explore the applicability of "what if?" scenarios in medical research.
Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors.
arXiv Detail & Related papers (2023-07-05T09:14:09Z) - Informing clinical assessment by contextualizing post-hoc explanations
of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Align, Reason and Learn: Enhancing Medical Vision-and-Language
Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach to enhance structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning using knowledge as the supplementation of the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z) - ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis
of Skin Lesions [4.886872847478552]
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal concept-based explanations consisting of easy-to-understand textual explanations.
It will be the basis for similar applications in other biomedical imaging fields.
arXiv Detail & Related papers (2022-01-04T17:11:28Z) - Explainable Deep Learning in Healthcare: A Methodological Survey from an
Attribution View [36.025217954247125]
We introduce the methods for interpretability in depth and comprehensively as a methodological reference for future researchers or clinical practitioners.
We discuss how these methods have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies.
arXiv Detail & Related papers (2021-12-05T17:12:53Z) - Transparency of Deep Neural Networks for Medical Image Analysis: A
Review of Interpretability Methods [3.3918638314432936]
Deep neural networks have shown the same or better performance than clinicians in many tasks.
Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics concerning the decision-making process.
There is a need to ensure interpretability of deep neural networks before they can be incorporated in the routine clinical workflow.
arXiv Detail & Related papers (2021-11-01T01:42:26Z) - Semi-Supervised Variational Reasoning for Medical Dialogue Generation [70.838542865384]
Two key characteristics are relevant for medical dialogue generation: patient states and physician actions.
We propose an end-to-end variational reasoning approach to medical dialogue generation.
A physician policy network composed of an action-classifier and two reasoning detectors is proposed for augmented reasoning ability.
arXiv Detail & Related papers (2021-05-13T04:14:35Z) - Learning Binary Semantic Embedding for Histology Image Classification
and Retrieval [56.34863511025423]
We propose a novel method for Learning Binary Semantic Embedding (LBSE).
Based on the efficient and effective embedding, classification and retrieval are performed to provide interpretable computer-assisted diagnosis for histology images.
Experiments conducted on three benchmark datasets validate the superiority of LBSE under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:36:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.