A Brief Review of Explainable Artificial Intelligence in Healthcare
- URL: http://arxiv.org/abs/2304.01543v1
- Date: Tue, 4 Apr 2023 05:41:57 GMT
- Title: A Brief Review of Explainable Artificial Intelligence in Healthcare
- Authors: Zahra Sadeghi, Roohallah Alizadehsani, Mehmet Akif Cifci, Samina
Kausar, Rizwan Rehman, Priyakshi Mahanta, Pranjal Kumar Bora, Ammar Almasri,
Rami S. Alkhawaldeh, Sadiq Hussain, Bilal Alatas, Afshin Shoeibi, Hossein
Moosaei, Milan Hladik, Saeid Nahavandi, Panos M. Pardalos
- Abstract summary: XAI refers to the techniques and methods for building AI applications that help end users interpret the output and predictions of AI models.
Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice.
- Score: 7.844015105790313
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: XAI refers to the techniques and methods for building AI applications that
help end users interpret the output and predictions of AI models. Black-box AI
applications in high-stakes decision-making situations, such as the medical
domain, have increased the demand for transparency and explainability, since
wrong predictions may have severe consequences. Model explainability and
interpretability are vital for the successful deployment of AI models in
healthcare practice. The underlying reasoning of AI applications needs to be
transparent to clinicians in order to gain their trust. This paper presents a
systematic review of XAI aspects and challenges in the healthcare domain. The
primary goals of this study are to review various XAI methods, their
challenges, and related machine learning models in healthcare. The methods are
discussed under six categories: feature-oriented methods, global methods,
concept models, surrogate models, local pixel-based methods, and human-centric
methods. Most importantly, the paper explores XAI's role in healthcare problems
to clarify its necessity in safety-critical applications. The paper aims to
establish a comprehensive understanding of XAI-related applications in the
healthcare field by reviewing the related experimental results. To help future
research fill the remaining gaps, the importance of XAI models from different
viewpoints, along with their limitations, is investigated.
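To make the surrogate-model category above concrete, the sketch below trains a shallow decision tree to mimic an opaque classifier so that its decision rules can be inspected in place of the black box. This is a minimal illustration, not a method from the paper under review: scikit-learn, the breast-cancer dataset, and the random-forest stand-in are all assumptions chosen to keep the example self-contained.

```python
# Minimal global-surrogate sketch (hypothetical; not from the reviewed paper).
# A shallow, human-readable decision tree is fitted to the *predictions*
# of an opaque model, so the tree approximates the model, not the data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()  # placeholder for a clinical dataset
X, y = data.data, data.target

# Stand-in for an opaque "black box" clinical model.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The tree's rules are the explanation a clinician could audit.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A high fidelity score suggests the tree's rules are a faithful summary of the black box's global behaviour; a low score means the surrogate explanation should not be trusted.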
Related papers
- Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [9.93411316886105]
Self-eXplainable AI (S-XAI) incorporates explainability directly into the training process of deep learning models.
This paper outlines the desired characteristics of explainability and existing evaluation methods for assessing explanation quality.
arXiv Detail & Related papers (2024-10-03T09:29:28Z)
- A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
Current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z)
- From Explainable to Interpretable Deep Learning for Natural Language Processing in Healthcare: How Far from Reality? [8.423877102146433]
"eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI.
Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique.
The major challenges identified are that most XIAI work does not explore "global" modelling processes, and that best practices, systematic evaluation, and benchmarks are lacking.
arXiv Detail & Related papers (2024-03-18T15:53:33Z)
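The entry above identifies attention as the most prevalent emerging IAI technique. The toy sketch below shows why attention is often treated as interpretable by design: the weights form an explicit probability distribution over input tokens that can be read off directly. It uses random embeddings and plain numpy, so it is a hypothetical illustration rather than any cited paper's method.

```python
# Toy single-head attention over clinical-note tokens (random embeddings).
# The softmax weights themselves are the (putative) explanation.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["patient", "reports", "severe", "chest", "pain"]
d = 8                                   # embedding dimension
E = rng.normal(size=(len(tokens), d))   # stand-in token embeddings

query = E[-1]                           # which tokens does "pain" attend to?
scores = E @ query / np.sqrt(d)         # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax over the input tokens

# Inspecting this distribution is what makes attention "interpretable".
for tok, w in sorted(zip(tokens, weights), key=lambda p: -p[1]):
    print(f"{tok:>8s}: {w:.3f}")
```

Whether such weights are faithful explanations is itself debated, which is one reason the entry also calls for systematic evaluation and benchmarks.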
- A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? [0.0]
We give a systematic analysis of explainable artificial intelligence (XAI).
The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed.
We explain how trustworthy AI can be derived by describing AI models for healthcare fields.
arXiv Detail & Related papers (2023-04-10T17:40:21Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept for evaluating explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
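The counterfactual idea summarized above can be illustrated with a deliberately naive sketch: a brute-force search for the smallest single-feature change that flips a classifier's prediction. This is not the CEILS method itself (which works in a latent space and accounts for action feasibility); the scikit-learn model, dataset, and search strategy are assumptions made for a self-contained toy.

```python
# Naive counterfactual search (hypothetical; NOT the CEILS method):
# find the cheapest single-feature shift that flips the predicted class.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # placeholder for clinical data
X, y = data.data, data.target
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

x = X[0].copy()
original = model.predict(x.reshape(1, -1))[0]

best = None  # (cost in std devs, feature index, counterfactual value)
for j in range(X.shape[1]):
    std = X[:, j].std()
    # Try shifts from smallest to largest magnitude, up to +/- 3 std devs.
    for step in sorted(np.linspace(-3.0, 3.0, 61), key=abs):
        x_cf = x.copy()
        x_cf[j] += step * std
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            if best is None or abs(step) < best[0]:
                best = (abs(step), j, x_cf[j])
            break  # smallest flipping shift for this feature found

if best is not None:
    cost, j, val = best
    print(f"Flip the prediction by moving '{data.feature_names[j]}' "
          f"from {x[j]:.2f} to {val:.2f} ({cost:.2f} std devs).")
```

Methods like CEILS add exactly what this toy lacks: feasibility constraints, causal structure, and changes expressed as interventions rather than arbitrary feature edits.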
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI [0.0]
XAI aims to produce a demonstrable basis for trust, which for human subjects is achieved through communicative means.
Entrusting a machine with decisions that affect human well-being poses an ethical conundrum.
XAI methods produce visualizations of feature contributions to a given model's output at both the local and global level.
arXiv Detail & Related papers (2021-03-08T18:15:52Z)
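As a concrete instance of the global level mentioned above, the sketch below ranks features by permutation importance, one common model-agnostic way to quantify global feature contributions. The cited paper examines its own selection of XAI methods on electronic health records; this particular technique, scikit-learn, and the dataset are assumptions used here for illustration.

```python
# Global feature-contribution sketch via permutation importance
# (hypothetical illustration; not necessarily a method from the cited paper).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # placeholder for a high-dimensional EHR dataset
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model relies on that feature globally.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = result.importances_mean.argsort()[::-1]
for j in ranked[:5]:
    print(f"{data.feature_names[j]:>25s}: {result.importances_mean[j]:.3f} "
          f"+/- {result.importances_std[j]:.3f}")
```

Local counterparts, explaining a single patient's prediction, typically use methods such as LIME or SHAP instead of a dataset-wide permutation.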
- Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems [4.9449660544238085]
We discuss early achievements in the development of explainable AI for validation of known disease criteria.
We highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision support tool.
arXiv Detail & Related papers (2020-11-26T08:08:19Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.