Achievements and Challenges in Explaining Deep Learning based
Computer-Aided Diagnosis Systems
- URL: http://arxiv.org/abs/2011.13169v1
- Date: Thu, 26 Nov 2020 08:08:19 GMT
- Title: Achievements and Challenges in Explaining Deep Learning based
Computer-Aided Diagnosis Systems
- Authors: Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed
- Abstract summary: We discuss early achievements in the development of explainable AI for the validation of known disease criteria.
We highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision support tool.
- Score: 4.9449660544238085
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The remarkable success of modern image-based AI methods, and the resulting interest in their application to critical decision-making processes, have led to a surge in efforts to make such intelligent systems transparent and explainable. The need for explainable AI stems not only from ethical and moral grounds but also from stricter legislation around the world mandating clear and justifiable explanations of any decision taken or assisted by AI. Especially in the medical context, where Computer-Aided Diagnosis can directly influence the treatment and well-being of patients, transparency is of utmost importance for a safe transition from lab research to real-world clinical practice. This paper provides a comprehensive overview of the current state of the art in explaining and interpreting Deep Learning based algorithms in applications of medical research and the diagnosis of diseases. We discuss early achievements in the development of explainable AI for the validation of known disease criteria, the exploration of new potential biomarkers, and the subsequent correction of AI models. Various explanation methods, such as visual, textual, post-hoc, ante-hoc, local, and global, are thoroughly and critically analyzed. Subsequently, we highlight some of the remaining challenges that stand in the way of practical applications of AI as a clinical decision support tool and provide recommendations for the direction of future research.
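As a concrete illustration of one explanation type from this taxonomy, below is a minimal sketch of a post-hoc, local, visual explanation: vanilla gradient saliency for an image classifier, written in PyTorch. The untrained ResNet and the random input are placeholders for a trained diagnostic model and a real medical image; a deployed CAD system would rely on a clinically validated attribution method.

```python
import torch
import torchvision.models as models

# Minimal sketch of a post-hoc, local, visual explanation:
# vanilla gradient saliency for a single input image.
# The untrained ResNet is a stand-in for a trained diagnostic CNN.
model = models.resnet18(weights=None)
model.eval()

# Hypothetical input: one RGB image in (batch, channel, H, W) layout;
# a real pipeline would load and normalize an actual medical image.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then gradient of the top class score w.r.t. the pixels.
logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()

# Saliency map: per-pixel influence, taking the max over color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)  # visualize e.g. with matplotlib's imshow
```

In the paper's terms, gradient saliency is post-hoc (computed after training, without modifying the model) and local (it explains a single prediction); ante-hoc and global methods instead build interpretability into the model or characterize its behavior as a whole.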
Related papers
- Artificial intelligence techniques in inherited retinal diseases: A review [19.107474958408847]
Inherited retinal diseases (IRDs) are a diverse group of genetic disorders that lead to progressive vision loss and are a major cause of blindness in working-age adults.
Recent advancements in artificial intelligence (AI) offer promising solutions to these challenges.
This review consolidates existing studies, identifies gaps, and provides an overview of AI's potential in diagnosing and managing IRDs.
arXiv Detail & Related papers (2024-10-10T03:14:51Z) - AI-Driven Healthcare: A Survey on Ensuring Fairness and Mitigating Bias [2.398440840890111]
AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome predictions.
These advancements, however, also introduce substantial ethical and fairness challenges.
Biases in data and algorithms can lead to disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across different demographic groups.
arXiv Detail & Related papers (2024-07-29T02:39:17Z) - A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
Current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
As AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
The shift towards transparency is not just beneficial; it is a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - Interpretable Medical Imagery Diagnosis with Self-Attentive
Transformers: A Review of Explainable AI for Health Care [2.7195102129095003]
Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules.
Deep-learning models are complex and are often treated as "black boxes," which causes uncertainty about how they operate.
This review summarises recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT.
arXiv Detail & Related papers (2023-09-01T05:01:52Z) - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z) - Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z) - A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions end users can interpret.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can
Existing Algorithms Fulfill Clinical Requirements? [42.75635888823057]
A heatmap is a form of explanation that highlights the features important to an AI model's prediction.
It is unknown how well heatmaps perform at explaining decisions on multi-modal medical images.
We propose the modality-specific feature importance (MSFI) metric to tackle this clinically important but technically ignored problem (a toy sketch of per-modality heatmap aggregation follows this list).
arXiv Detail & Related papers (2022-03-12T17:18:16Z)
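The MSFI metric itself is defined in the paper above; purely as a toy illustration of the underlying idea, and not the paper's actual formula, the sketch below aggregates a saliency heatmap into per-modality importance shares, assuming the imaging modalities are stacked along the first axis.

```python
import numpy as np

def modality_importance_shares(heatmap: np.ndarray) -> np.ndarray:
    """Toy aggregation (not the paper's MSFI definition): the fraction
    of total absolute saliency attributed to each imaging modality.

    heatmap: array of shape (num_modalities, H, W), one saliency map
    per modality (e.g. T1, T2, FLAIR channels of a brain MRI).
    """
    mass_per_modality = np.abs(heatmap).sum(axis=(1, 2))
    total = mass_per_modality.sum()
    return mass_per_modality / total if total > 0 else mass_per_modality

# Hypothetical example: a 3-modality scan with random saliency values.
rng = np.random.default_rng(0)
shares = modality_importance_shares(rng.random((3, 64, 64)))
print(shares, shares.sum())  # the shares sum to 1.0
```

A metric in this spirit can then compare such modality-wise attributions against the modalities clinicians actually consider decisive for a given diagnosis.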
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.