The Explanation Necessity for Healthcare AI
- URL: http://arxiv.org/abs/2406.00216v2
- Date: Fri, 28 Feb 2025 14:16:47 GMT
- Title: The Explanation Necessity for Healthcare AI
- Authors: Michail Mamalakis, Héloïse de Vareilles, Graham Murray, Pietro Lio, John Suckling
- Abstract summary: We propose a novel categorization system comprising four classes of explanation necessity. This framework provides a practical tool for researchers to determine the appropriate depth of explainability needed.
- Score: 3.8953842074141387
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainability is a critical factor in enhancing the trustworthiness and acceptance of artificial intelligence (AI) in healthcare, where decisions directly impact patient outcomes. Despite advancements in AI interpretability, clear guidelines on when and to what extent explanations are required in medical applications remain lacking. We propose a novel categorization system comprising four classes of explanation necessity (self-explainable, semi-explainable, non-explainable, and new-patterns discovery), guiding the required level of explanation; whether local (patient or sample level), global (cohort or dataset level), or both. To support this system, we introduce a mathematical formulation that incorporates three key factors: (i) robustness of the evaluation protocol, (ii) variability of expert observations, and (iii) representation dimensionality of the application. This framework provides a practical tool for researchers to determine the appropriate depth of explainability needed, addressing the critical question: When does an AI medical application need to be explained, and at what level of detail?
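The abstract does not reproduce the paper's mathematical formulation, so the sketch below is only a rough illustration of how a decision rule over the three stated factors might be organized. The weights, thresholds, and the mapping from scores to the four classes are hypothetical placeholders, not the authors' formula.

```python
from dataclasses import dataclass

@dataclass
class ApplicationProfile:
    """The three factors named in the abstract, each normalized to [0, 1]."""
    protocol_robustness: float   # (i) robustness of the evaluation protocol
    expert_variability: float    # (ii) variability of expert observations
    representation_dim: float    # (iii) representation dimensionality of the application

def explanation_necessity(p: ApplicationProfile) -> str:
    """Illustrative decision rule: the weights and cut-offs are placeholders,
    not the formulation proposed in the paper."""
    # Plausible intuition: high expert disagreement and high-dimensional
    # representations increase the need for explanation, while a robust
    # evaluation protocol reduces it.
    score = (0.4 * p.expert_variability
             + 0.4 * p.representation_dim
             + 0.2 * (1.0 - p.protocol_robustness))
    if score < 0.25:
        return "non-explainable"
    if score < 0.50:
        return "self-explainable"
    if score < 0.75:
        return "semi-explainable"
    return "new-patterns discovery"

# Example: a high-dimensional imaging task with strong expert disagreement
# and a weakly validated evaluation protocol.
print(explanation_necessity(ApplicationProfile(0.3, 0.8, 0.9)))
# -> "new-patterns discovery" under these placeholder thresholds
```

Whether a given class then calls for local (patient or sample level), global (cohort or dataset level), or both kinds of explanation is specified by the paper's framework rather than by this sketch.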
Related papers
- CasiMedicos-Arg: A Medical Question Answering Dataset Annotated with Explanatory Argumentative Structures [19.242920846826895]
We present the first multilingual dataset for Medical Question Answering where correct and incorrect diagnoses for a clinical case are enriched with a natural language explanation written by doctors.
This dataset consists of 558 clinical cases in four languages (English, Spanish, French, Italian) with explanations, where we annotated 5021 claims, 2313 premises, 2431 support relations, and 1106 attack relations.
arXiv Detail & Related papers (2024-10-07T17:41:45Z) - Explainable AI: Definition and attributes of a good explanation for health AI [0.18846515534317265]
Understanding how and why an AI system makes a recommendation may require complex explanations of its inner workings and reasoning processes.
To fully realize the potential of AI, it is critical to address two fundamental questions about explanations for safety-critical AI applications.
The research outputs include (1) a definition of what constitutes an explanation in health-AI and (2) a comprehensive list of attributes that characterize a good explanation in health-AI.
arXiv Detail & Related papers (2024-09-09T16:56:31Z) - Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z) - Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z) - Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven clinical decision support.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z) - Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
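As a hedged sketch of the general concept-bottleneck pattern this summary describes (not the paper's released code): image features from a pretrained vision-language model are scored against text embeddings of clinical concepts, and a simple linear layer over those concept scores yields the final, interpretable prediction. The class name, shapes, and the assumption of CLIP-style encoders are illustrative.

```python
import torch
import torch.nn as nn

class ConceptBottleneckClassifier(nn.Module):
    """Illustrative concept bottleneck: image -> clinical-concept scores -> label.

    Assumes `image_encoder` is the image tower of a pretrained vision-language
    model and `concept_embeddings` holds L2-normalized text embeddings of
    clinical concept phrases (e.g. obtained by prompting an LLM for concepts).
    """
    def __init__(self, image_encoder: nn.Module,
                 concept_embeddings: torch.Tensor, num_classes: int):
        super().__init__()
        self.image_encoder = image_encoder                    # frozen VLM image tower
        self.register_buffer("concepts", concept_embeddings)  # (C, D) concept text embeddings
        self.head = nn.Linear(concept_embeddings.shape[0], num_classes)

    def forward(self, images: torch.Tensor):
        feats = self.image_encoder(images)                    # (B, D) latent image features
        feats = feats / feats.norm(dim=-1, keepdim=True)
        concept_scores = feats @ self.concepts.T              # (B, C) similarity to each concept
        logits = self.head(concept_scores)                    # interpretable linear readout
        return logits, concept_scores                         # scores double as the explanation
```

In this pattern, typically only the linear head is trained, so each prediction can be traced back to a weighted set of named clinical concepts.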
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence [33.018873142559286]
We propose a novel approach for generating natural language explanations for answers predicted by medical QA systems.
Our system extracts knowledge from medical textbooks to enhance the quality of explanations during the explanation generation process.
arXiv Detail & Related papers (2023-10-02T16:00:37Z) - HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine [7.089952396422835]
ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher level schemes proper of the human argumentation capacity.
As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
arXiv Detail & Related papers (2023-06-09T16:50:02Z) - Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z) - NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to enlighten practitioners on the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper contains examples based on the heart disease dataset and elucidates how explainability techniques should be applied to create trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z) - Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex Healthcare Question Answering [89.76059961309453]
The HeadQA dataset contains multiple-choice questions authorized for the public healthcare specialization exam.
These questions are the most challenging for current QA systems.
We present a Multi-step reasoning with Knowledge extraction framework (MurKe) that strives to make full use of off-the-shelf pre-trained models.
arXiv Detail & Related papers (2020-08-06T02:47:46Z) - The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.