Explainable AI applications in the Medical Domain: a systematic review
- URL: http://arxiv.org/abs/2308.05411v1
- Date: Thu, 10 Aug 2023 08:12:17 GMT
- Title: Explainable AI applications in the Medical Domain: a systematic review
- Authors: Nicoletta Prentzas, Antonis Kakas, and Constantinos S. Pattichis
- Abstract summary: The field of Medical AI faces various challenges in terms of building user trust, complying with regulations, and using data ethically.
This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years.
- Score: 1.4419517737536707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence in Medicine has made significant progress with
emerging applications in medical imaging, patient care, and other areas. While
these applications have proven successful in retrospective studies, very few of
them have been applied in practice. The field of Medical AI faces various
challenges in terms of building user trust, complying with regulations, and
using data ethically. Explainable AI (XAI) aims to enable humans to understand
AI and trust its results. This paper presents a literature review on recent
developments in XAI solutions for medical decision support, based on a
representative sample of 198 articles published in recent years. The systematic
synthesis of the relevant articles resulted in several findings: (1)
model-agnostic XAI techniques were mostly employed in these solutions; (2) deep
learning models are utilized more than other types of machine learning models;
(3) explainability was applied to promote trust, but very few works reported
physicians' participation in the loop; (4) visual and interactive user
interfaces are more useful for understanding the explanations and
recommendations of the system. More research is needed on collaboration between
medical and AI experts, which could guide the development of suitable
frameworks for the design, implementation, and evaluation of XAI solutions in
medicine.
Related papers
- A Survey of Models for Cognitive Diagnosis: New Developments and Future Directions [66.40362209055023]
This paper aims to provide a survey of current models for cognitive diagnosis, with more attention on new developments using machine learning-based methods.
By comparing the model structures, parameter estimation algorithms, model evaluation methods and applications, we provide a relatively comprehensive review of the recent trends in cognitive diagnosis models.
arXiv Detail & Related papers (2024-07-07T18:02:00Z) - A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
The current advances in artificial intelligence (AI) models enable automatic gait analysis for NDs identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z) - A Review on Explainable Artificial Intelligence for Healthcare: Why,
How, and When? [0.0]
We give a systematic analysis of explainable artificial intelligence (XAI).
The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed.
We present an explanation of how a trustworthy AI can be derived from describing AI models for healthcare fields.
arXiv Detail & Related papers (2023-04-10T17:40:21Z) - A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose results humans can understand.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z) - Current State of Community-Driven Radiological AI Deployment in Medical
Imaging [1.474525456020066]
This report represents several years of weekly discussions and hands-on problem solving experience by groups of industry experts and clinicians in the MONAI Consortium.
We identify barriers between AI-model development in research labs and subsequent clinical deployment.
We discuss various AI integration points in a clinical Radiology workflow.
arXiv Detail & Related papers (2022-12-29T05:17:59Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - AutoPrognosis 2.0: Democratizing Diagnostic and Prognostic Modeling in
Healthcare with Automated Machine Learning [72.2614468437919]
We present a machine learning framework, AutoPrognosis 2.0, to develop diagnostic and prognostic models.
We provide an illustrative application where we construct a prognostic risk score for diabetes using the UK Biobank.
Our risk score has been implemented as a web-based decision support tool and can be publicly accessed by patients and clinicians worldwide.
arXiv Detail & Related papers (2022-10-21T16:31:46Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI)
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Who Goes First? Influences of Human-AI Workflow on Decision Making in
Clinical Imaging [24.911186503082465]
This study explores the effects of providing AI assistance at the start of a diagnostic session in radiology versus after the radiologist has made a provisional decision.
We found that participants who are asked to register provisional responses in advance of reviewing AI inferences are less likely to agree with the AI regardless of whether the advice is accurate and, in instances of disagreement with the AI, are less likely to seek the second opinion of a colleague.
arXiv Detail & Related papers (2022-05-19T16:59:25Z) - Unbox the Black-box for the Medical Explainable AI via Multi-modal and
Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond [3.4031539425106683]
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made.
Many machine learning algorithms cannot manifest how and why a decision has been made.
XAI becomes more and more crucial for deep learning powered applications, especially for medical and healthcare studies.
arXiv Detail & Related papers (2021-02-03T10:56:58Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.