Unbox the Black-box for the Medical Explainable AI via Multi-modal and
Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
- URL: http://arxiv.org/abs/2102.01998v1
- Date: Wed, 3 Feb 2021 10:56:58 GMT
- Title: Unbox the Black-box for the Medical Explainable AI via Multi-modal and
Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
- Authors: Guang Yang, Qinghao Ye, Jun Xia
- Abstract summary: Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made.
Many machine learning algorithms cannot explain how and why a decision has been made.
XAI is becoming increasingly crucial for deep-learning-powered applications, especially for medical and healthcare studies.
- Score: 3.4031539425106683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable Artificial Intelligence (XAI) is an emerging research topic of
machine learning aimed at unboxing how AI systems' black-box choices are made.
This research field inspects the measures and models involved in
decision-making and seeks solutions to explain them explicitly. Many machine
learning algorithms cannot explain how and why a decision has been made; this
is particularly true of the most popular deep neural network approaches
currently in use. Consequently, our confidence in AI systems can be hindered by
the lack of explainability in these black-box models. XAI is becoming
increasingly crucial for deep-learning-powered applications, especially for
medical and healthcare studies, even though these deep neural networks
generally deliver impressive performance. The insufficient explainability and
transparency of most existing AI systems may be one of the major reasons why AI
tools are rarely implemented and integrated into routine clinical practice. In
this study, we first surveyed the current progress of XAI and, in particular,
its advances in healthcare applications. We then introduced our XAI solutions
leveraging multi-modal and multi-centre data fusion, and subsequently validated
them in two showcases following real clinical scenarios. Comprehensive
quantitative and qualitative analyses demonstrate the efficacy of our proposed
XAI solutions, from which we envisage successful applications to a broader
range of clinical questions.
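As a concrete illustration of the multi-modal fusion described in the abstract, the following is a minimal sketch written with PyTorch, not the authors' actual architecture: a two-branch network fuses a synthetic imaging input with synthetic tabular clinical variables, and a simple input-gradient saliency map serves as one possible post-hoc explanation. The FusionNet name, all layer sizes, and the random inputs are illustrative assumptions.

```python
# Illustrative sketch only: NOT the architecture proposed in the paper.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_tabular: int, n_classes: int = 2):
        super().__init__()
        # Imaging branch: a small CNN encoder for a single-channel scan.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tabular branch: a small MLP encoder for clinical variables.
        self.tabular_branch = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        # Late fusion by concatenation, then a linear classifier head.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

model = FusionNet(n_tabular=8)
image = torch.randn(1, 1, 64, 64, requires_grad=True)   # synthetic scan
tabular = torch.randn(1, 8, requires_grad=True)          # synthetic clinical record

logits = model(image, tabular)
# Input-gradient saliency: gradient of the top class score w.r.t. both inputs.
logits[0, logits.argmax()].backward()
image_saliency = image.grad.abs().squeeze()      # (64, 64) heat map over the scan
tabular_saliency = tabular.grad.abs().squeeze()  # per-variable attribution
print(image_saliency.shape, tabular_saliency)
```

In practice the plain gradient saliency would typically be replaced by Grad-CAM, SHAP, or attention-based attribution, but the fuse-then-attribute pattern stays the same.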
Related papers
- Automated Ensemble Multimodal Machine Learning for Healthcare [52.500923923797835]
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
arXiv Detail & Related papers (2024-07-25T17:46:38Z)
- A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
The current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z)
- Explainable artificial intelligence for Healthcare applications using Random Forest Classifier with LIME and SHAP [0.0]
There is a pressing need to understand the computational details hidden in black-box AI techniques.
The term explainable AI (xAI) was coined in response to these challenges.
This book provides an in-depth analysis of several xAI frameworks and methods (a minimal Random Forest + LIME sketch appears after this list).
arXiv Detail & Related papers (2023-11-09T11:43:10Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Explainable AI applications in the Medical Domain: a systematic review [1.4419517737536707]
The field of Medical AI faces various challenges in terms of building user trust, complying with regulations, and using data ethically.
This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years.
arXiv Detail & Related papers (2023-08-10T08:12:17Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose decisions humans can understand.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification [0.0]
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
The research techniques used to gain insight into these black-box models belong to the field of explainable artificial intelligence (XAI).
arXiv Detail & Related papers (2022-12-10T06:17:43Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Explainable Artificial Intelligence Approaches: A Survey [0.22940141855172028]
Lack of explainability of a decision from an Artificial Intelligence-based "black box" system/model is a key stumbling block for adopting AI in high-stakes applications.
We demonstrate popular Explainable Artificial Intelligence (XAI) methods with a mutual case study/task.
We analyze their competitive advantages from multiple perspectives.
We recommend paths towards responsible or human-centered AI using XAI as a medium.
arXiv Detail & Related papers (2021-01-23T06:15:34Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aiming to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
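For the Random Forest + LIME entry above, here is a minimal sketch of that kind of setup, not the cited paper's actual pipeline: a Random Forest is fitted on synthetic tabular data and LIME produces a local, per-sample explanation. The synthetic dataset, feature names, and parameter values are illustrative assumptions, and the scikit-learn and lime packages are assumed to be installed.

```python
# Illustrative sketch only: NOT the cited paper's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer  # assumes `lime` is installed

# Synthetic stand-in for tabular clinical features (hypothetical data).
X, y = make_classification(n_samples=500, n_features=8, n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the black-box model to be explained.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME fits a local surrogate model around one prediction and reports
# per-feature contributions for that single sample.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["negative", "positive"], mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # e.g. [("feature_3 > 0.52", 0.21), ...]
```

SHAP's TreeExplainer could be applied to the same fitted model in an analogous way to obtain global as well as local attributions.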