The role of explainability in creating trustworthy artificial
intelligence for health care: a comprehensive survey of the terminology,
design choices, and evaluation strategies
- URL: http://arxiv.org/abs/2007.15911v2
- Date: Tue, 5 Jan 2021 08:32:38 GMT
- Title: The role of explainability in creating trustworthy artificial
intelligence for health care: a comprehensive survey of the terminology,
design choices, and evaluation strategies
- Authors: Aniek F. Markus, Jan A. Kors, Peter R. Rijnbeek
- Abstract summary: Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
- Score: 1.2762298148425795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) has huge potential to improve the health and
well-being of people, but adoption in clinical practice is still limited. Lack
of transparency is identified as one of the main barriers to implementation, as
clinicians need to be confident that the AI system can be trusted. Explainable
AI has the potential to overcome this issue and can be a step towards
trustworthy AI.
In this paper we review the recent literature to provide guidance to
researchers and practitioners on the design of explainable AI systems for the
health-care domain and contribute to formalization of the field of explainable
AI. We argue that the reason for demanding explainability determines what
should be explained, which in turn determines the relative importance of the
properties of explainability (i.e. interpretability and fidelity). Based on
this, we propose
a framework to guide the choice between classes of explainable AI methods
(explainable modelling versus post-hoc explanation; model-based,
attribution-based, or example-based explanations; global and local
explanations). Furthermore, we find that quantitative evaluation metrics, which
are important for objective standardized evaluation, are still lacking for some
properties (e.g. clarity) and types of explanations (e.g. example-based
methods). We conclude that explainable modelling can contribute to trustworthy
AI, but the benefits of explainability still need to be proven in practice and
complementary measures might be needed to create trustworthy AI in health care
(e.g. reporting data quality, performing extensive (external) validation, and
regulation).
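To make these classes concrete, the sketch below (hypothetical, not taken from the paper) contrasts a global, post-hoc, attribution-based explanation (permutation importance) with a local one (replacing one feature of a single instance with its training mean); the dataset and model are illustrative placeholders.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative stand-ins: any tabular clinical dataset and any black-box
# classifier could take their place.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Global, post-hoc, attribution-based explanation: permutation importance,
# i.e. the drop in test accuracy when one feature is shuffled.
rng = np.random.default_rng(0)
base = model.score(X_te, y_te)
global_attr = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    global_attr.append(base - model.score(X_perm, y_te))

# Local, post-hoc, attribution-based explanation for one instance: the
# change in predicted probability when a single feature is replaced by
# its training-set mean.
x = X_te[:1]
p = model.predict_proba(x)[0, 1]
local_attr = []
for j in range(X_te.shape[1]):
    x_mod = x.copy()
    x_mod[0, j] = X_tr[:, j].mean()
    local_attr.append(p - model.predict_proba(x_mod)[0, 1])
```

In the paper's terms, global_attr describes overall model behaviour while local_attr explains a single prediction; both are post-hoc and attribution-based, as opposed to explainable modelling, where interpretability is built into the model itself.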
Related papers
- The Explanation Necessity for Healthcare AI [3.8953842074141387]
We propose a novel categorization system with four distinct classes of explanation necessity.
Three key factors are considered: the robustness of the evaluation protocol, the variability of expert observations, and the representation dimensionality of the application.
arXiv Detail & Related papers (2024-05-31T22:20:10Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding (a minimal sketch of such a map is shown below).
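The sketch below is a minimal, hypothetical illustration of the kind of gradient saliency map evaluated in that study; the model is a toy stand-in, and the saliency variants the study compares differ in detail.

```python
# Plain gradient saliency: the absolute gradient of a class score with
# respect to the input. The model is a toy placeholder; any
# differentiable classifier works the same way.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32, requires_grad=True)  # one fake RGB image

score = model(x)[0, 3]  # logit of an arbitrary target class
score.backward()
# Reduce over colour channels to get one importance value per pixel.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (32, 32)
```

Whether showing such maps to developers actually improves their decisions is precisely what the study tests, with largely negative results.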
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) [0.0]
We argue that XAI evaluation is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:15:44Z)
- HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine [7.089952396422835]
ANTIDOTE fosters an integrated vision of explainable AI, in which low-level characteristics of the deep learning process are combined with higher-level schemes characteristic of human argumentation.
As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
arXiv Detail & Related papers (2023-06-09T16:50:02Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
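As a minimal, hypothetical illustration of the generic counterfactual idea (not CEILS itself), the sketch below searches for a small input change that lowers a toy, differentiable risk model's prediction; the model and data are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())  # toy risk model
x0 = torch.randn(4)                                   # factual instance
x = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    # Push the predicted risk down while keeping the counterfactual
    # close to the original instance (distance penalty).
    loss = model(x).squeeze() + 0.1 * (x - x0).norm()
    loss.backward()
    opt.step()

delta = (x - x0).detach()  # suggested feature changes
```

The point of CEILS, per the summary above, is that such an unconstrained search ignores whether the suggested changes correspond to feasible actions, which its latent-space interventions are designed to address.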
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts, drawn from different disciplines, that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to inform practitioners about the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper presents examples based on the heart disease dataset and discusses how explainability techniques should be chosen to create trust.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.