Interpretable Medical Imagery Diagnosis with Self-Attentive
Transformers: A Review of Explainable AI for Health Care
- URL: http://arxiv.org/abs/2309.00252v1
- Date: Fri, 1 Sep 2023 05:01:52 GMT
- Title: Interpretable Medical Imagery Diagnosis with Self-Attentive
Transformers: A Review of Explainable AI for Health Care
- Authors: Tin Lai
- Abstract summary: Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules.
Deep-learning models are complex and are often treated as "black boxes", which creates uncertainty regarding how they operate.
This review summarises recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT.
- Score: 2.7195102129095003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in artificial intelligence (AI) have facilitated its
widespread adoption in primary medical services, addressing the demand-supply
imbalance in healthcare. Vision Transformers (ViT) have emerged as
state-of-the-art computer vision models, benefiting from self-attention
modules. However, compared to traditional machine-learning approaches,
deep-learning models are complex and are often treated as a "black box" that
can cause uncertainty regarding how they operate. Explainable Artificial
Intelligence (XAI) refers to methods that explain and interpret machine
learning models' inner workings and how they come to decisions, which is
especially important in the medical domain to guide the healthcare
decision-making process. This review summarises recent ViT advancements and
interpretative approaches to understanding the decision-making process of ViT,
enabling transparency in medical diagnosis applications.
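For readers less familiar with the self-attention modules the abstract refers to, the following is a minimal, illustrative sketch of single-head scaled dot-product self-attention, the core operation inside a ViT block. The dimensions and weights are toy placeholders; real models add multiple heads, residual connections, and layer normalisation.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (n_tokens, d_model) input embeddings (e.g. flattened image patches).
    w_q, w_k, w_v: (d_model, d_head) projection matrices (toy placeholders).
    Returns the attended values and the attention matrix, whose rows show
    how much each token draws on every other token.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (n_tokens, n_tokens)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ v, attn

# Toy usage: 4 "patches" with 8-dim embeddings and one 8-dim head.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, attn = self_attention(x, *(rng.normal(size=(8, 8)) for _ in range(3)))
print(attn.round(2))  # each row sums to 1
```

The attention matrix is what most ViT interpretability methods start from: its rows can be read as a (rough) measure of which input patches the model attends to.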
Related papers
- Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [9.93411316886105]
Self-eXplainable AI (S-XAI) incorporates explainability directly into the training process of deep learning models.
This survey presents a comprehensive review across various image modalities and clinical applications.
arXiv Detail & Related papers (2024-10-03T09:29:28Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
As AI models often operate as "black boxes", with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial; it is a critical step towards responsible AI integration in healthcare (a sketch of one attention-based saliency method appears after this list).
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- XAI Renaissance: Redefining Interpretability in Medical Diagnostic Models [0.0]
The XAI Renaissance aims to redefine the interpretability of medical diagnostic models.
XAI techniques empower healthcare professionals to understand, trust, and effectively utilize these models for accurate and reliable medical diagnoses.
arXiv Detail & Related papers (2023-06-02T16:42:20Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions can be interpreted by end users.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practices.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- HEAR4Health: A blueprint for making computer audition a staple of modern healthcare [89.8799665638295]
Recent years have seen a rapid increase in digital medicine research in an attempt to transform traditional healthcare systems.
Computer audition, however, has lagged behind, at least in terms of commercial interest.
We categorise the advances needed in four key pillars: Hear, corresponding to the cornerstone technologies needed to analyse auditory signals in real-life conditions; Earlier, for the advances needed in computational and data efficiency; Attentively, for accounting for individual differences and handling the longitudinal nature of medical data; and Responsibly, for ensuring compliance with ethical standards.
arXiv Detail & Related papers (2023-01-25T09:25:08Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography [70.37371604119826]
Building AI models with trustworthiness is important especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which has been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
arXiv Detail & Related papers (2022-07-19T14:55:42Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems [4.9449660544238085]
We discuss early achievements in the development of explainable AI for validating known disease criteria.
We highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision support tool.
arXiv Detail & Related papers (2020-11-26T08:08:19Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
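Several of the papers above discuss turning ViT attention into saliency maps for interpretation. As an illustration only (not the method of any specific paper listed here), the sketch below implements attention rollout (Abnar & Zuidema, 2020), which composes head-averaged attention matrices across layers to estimate how strongly each image patch influences the [CLS] token. The inputs here are random placeholders; a real pipeline would extract the attention matrices from a trained ViT.

```python
import numpy as np

def attention_rollout(attn_layers):
    """Attention rollout: propagate attention through the layers to
    estimate each input patch's influence on the [CLS] token, a common
    way to turn ViT attention into a saliency map.

    attn_layers: list of (n_tokens, n_tokens) head-averaged attention
    matrices, ordered from the first to the last transformer layer;
    token 0 is assumed to be [CLS].
    """
    n = attn_layers[0].shape[0]
    rollout = np.eye(n)
    for attn in attn_layers:
        a = attn + np.eye(n)                # account for the residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalise rows
        rollout = a @ rollout               # compose with earlier layers
    return rollout[0, 1:]                   # [CLS] attention to each patch

# Toy usage: 3 layers over 1 [CLS] token + 9 patches (a 3x3 grid).
rng = np.random.default_rng(0)
layers = [rng.dirichlet(np.ones(10), size=10) for _ in range(3)]
saliency = attention_rollout(layers).reshape(3, 3)
print(saliency.round(3))  # higher = more influence on the class token
```

As the saliency-map paper above cautions, such attention-derived maps can be inconsistent across methods and inputs, so they should be treated as one interpretability signal among several rather than a clinical explanation.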
This list is automatically generated from the titles and abstracts of the papers on this site.