Explainable Deep Learning Methods in Medical Image Classification: A
Survey
- URL: http://arxiv.org/abs/2205.04766v3
- Date: Tue, 19 Sep 2023 13:03:24 GMT
- Title: Explainable Deep Learning Methods in Medical Image Classification: A
Survey
- Authors: Cristiano Patrício, João C. Neves, Luís F. Teixeira
- Abstract summary: State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remarkable success of deep learning has prompted interest in its
application to medical imaging diagnosis. Even though state-of-the-art deep
learning models have achieved human-level accuracy on the classification of
different types of medical data, these models are hardly adopted in clinical
workflows, mainly due to their lack of interpretability. The black-box nature of
deep learning models has raised the need for devising strategies to explain the
decision process of these models, leading to the creation of the topic of
eXplainable Artificial Intelligence (XAI). In this context, we provide a
thorough survey of XAI applied to medical imaging diagnosis, including visual,
textual, example-based and concept-based explanation methods. Moreover, this
work reviews the existing medical imaging datasets and the existing metrics for
evaluating the quality of the explanations. In addition, we include a
performance comparison among a set of report generation-based methods. Finally,
the major challenges in applying XAI to medical imaging and the future research
directions on the topic are also discussed.
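To make the visual-explanation category surveyed here concrete, the sketch below computes a Grad-CAM-style saliency map in PyTorch. It is a minimal illustration, not any surveyed method's release: the untrained ResNet-18 and random input stand in for a trained diagnostic model and a medical image.

```python
# Minimal Grad-CAM-style visual explanation (sketch; untrained ResNet-18
# stands in for a real diagnostic classifier).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()            # placeholder classifier
features, grads = {}, {}

def fwd_hook(_, __, out):                        # cache last conv feature map
    features["map"] = out

def bwd_hook(_, __, grad_out):                   # cache its gradient
    grads["map"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
logits = model(x)
cls = int(logits.argmax())
logits[0, cls].backward()                        # gradient of the top class

w = grads["map"].mean(dim=(2, 3), keepdim=True)  # channel weights (pooled grads)
cam = F.relu((w * features["map"]).sum(dim=1))   # weighted sum, then ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) saliency map over the input
```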
Related papers
- Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification [8.382606243533942]
We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis.
By leveraging a pretrained vision-language model, Explicd injects explicit diagnostic criteria into the embedding space as knowledge anchors.
The final diagnostic outcome is determined based on the similarity scores between the encoded visual concepts and the textual criteria embeddings.
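A minimal sketch of this criteria-anchor idea, with random vectors standing in for the vision-language model's outputs and hypothetical dermatology criteria (not the authors' code):

```python
# Sketch: diagnosis via similarity between encoded visual concepts and
# textual criteria embeddings (hypothetical criteria and dimensions).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 512                                   # shared embedding dimension
criteria = ["asymmetry", "border irregularity", "color variation"]
# Stand-ins for a pretrained vision-language model's outputs:
text_anchors = F.normalize(torch.randn(len(criteria), d), dim=-1)
visual_concepts = F.normalize(torch.randn(len(criteria), d), dim=-1)

# Per-criterion similarity scores act as interpretable evidence.
scores = (visual_concepts * text_anchors).sum(-1)      # cosine similarities
class_head = torch.nn.Linear(len(criteria), 2)         # e.g. benign vs. malignant
probs = class_head(scores).softmax(-1)
for name, s in zip(criteria, scores):
    print(f"{name}: {s:+.3f}")
print("diagnosis probabilities:", probs.detach())
```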
arXiv Detail & Related papers (2024-06-08T23:23:28Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
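A minimal concept-bottleneck sketch under these assumptions (hypothetical concept names and feature dimensions; the label head sees only the concept scores, which is what makes the classifier inspectable):

```python
# Sketch of a concept-bottleneck classifier: image features are mapped to
# explicit concept scores, and only those scores feed the label head.
import torch
import torch.nn as nn

concepts = ["mass", "calcification", "spiculation"]   # e.g. queried from an LLM

class ConceptBottleneck(nn.Module):
    def __init__(self, feat_dim=512, n_concepts=len(concepts), n_classes=2):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, n_concepts)  # interpretable layer
        self.to_label = nn.Linear(n_concepts, n_classes)    # decision uses concepts only

    def forward(self, feats):
        c = torch.sigmoid(self.to_concepts(feats))  # per-concept activations
        return self.to_label(c), c

model = ConceptBottleneck()
feats = torch.randn(4, 512)                    # stand-in image features
logits, concept_scores = model(feats)
print(concept_scores[0])                       # inspectable evidence per image
```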
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review [2.8145809047875066]
We focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models.
We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation.
Our goal is to provide a comprehensive review of the use of deep generative models for medical image augmentation and to highlight their potential for improving the performance of deep learning algorithms in medical image analysis.
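As a rough illustration of generative augmentation, the following sketch extends a training set with samples drawn from a generator; the random decoder is only a placeholder for a trained VAE, GAN, or diffusion model, and in practice labels would come from a conditional generator:

```python
# Sketch: augmenting a training set with samples from a trained generator
# (a random decoder stands in for a real VAE/GAN/diffusion model).
import torch
from torch.utils.data import ConcatDataset, TensorDataset

real_x = torch.randn(100, 1, 64, 64)          # stand-in medical images
real_y = torch.randint(0, 2, (100,))
real_set = TensorDataset(real_x, real_y)

decoder = torch.nn.Sequential(                # placeholder generator
    torch.nn.Linear(16, 64 * 64), torch.nn.Tanh()
)
z = torch.randn(50, 16)                       # latent samples
synth_x = decoder(z).view(-1, 1, 64, 64)
synth_y = torch.randint(0, 2, (50,))          # conditional labels in practice

augmented = ConcatDataset([real_set, TensorDataset(synth_x, synth_y)])
print(len(augmented))                         # 150 training examples
```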
arXiv Detail & Related papers (2023-07-24T20:53:59Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
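A sketch of this kind of agreement check, scoring a thresholded attribution map against an expert mask with IoU (synthetic arrays; the top-10% threshold is an assumption, not the paper's protocol):

```python
# Sketch: comparing a thresholded attribution map to an expert annotation
# via intersection-over-union (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(0)
attribution = rng.random((64, 64))            # stand-in attribution map
expert_mask = np.zeros((64, 64), dtype=bool)
expert_mask[20:40, 20:40] = True              # stand-in expert annotation

pred_mask = attribution > np.quantile(attribution, 0.9)  # keep top-10% pixels
inter = np.logical_and(pred_mask, expert_mask).sum()
union = np.logical_or(pred_mask, expert_mask).sum()
print(f"IoU with expert annotation: {inter / union:.3f}")
```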
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models [0.0]
Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning based methods, in practice.
This paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that most influence the algorithm's decision.
It introduces a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques to deep learning models.
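A minimal sketch of the unified-framework idea, dispatching one input through several interpretability methods; the gradient-based methods and the registry below are illustrative stand-ins, not TorchEsegeta's actual API:

```python
# Sketch: a registry dispatching one model/input pair through several
# interpretability methods (gradient-based stand-ins for illustration).
import torch

def saliency(model, x, target):
    x = x.clone().requires_grad_(True)         # leaf copy of the input
    model(x)[0, target].backward()
    return x.grad.abs()                        # pixel-wise sensitivity

def smoothgrad(model, x, target, n=8, sigma=0.1):
    # Average saliency over noisy copies of the input.
    return torch.stack([
        saliency(model, x + sigma * torch.randn_like(x), target)
        for _ in range(n)
    ]).mean(0)

METHODS = {"saliency": saliency, "smoothgrad": smoothgrad}

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.randn(1, 1, 28, 28)                  # stand-in image
for name, fn in METHODS.items():
    print(name, fn(model, x, target=0).shape)  # one map per method
```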
arXiv Detail & Related papers (2021-10-16T01:00:15Z)
- Recent advances and clinical applications of deep learning in medical image analysis [7.132678647070632]
We reviewed and summarized more than 200 recently published papers to provide a comprehensive overview of applying deep learning methods in various medical image analysis tasks.
In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical imaging.
arXiv Detail & Related papers (2021-05-27T18:05:12Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
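A simplified sketch of a cross-reconstruction objective for two views (linear encoders and decoders; the adversarial and label-guidance terms of the full model are omitted):

```python
# Sketch: cross reconstruction between two views, pushing the per-view
# latent codes toward their shared (common) information.
import torch
import torch.nn as nn

d_in, d_z = 32, 8
enc1, enc2 = nn.Linear(d_in, d_z), nn.Linear(d_in, d_z)   # per-view encoders
dec1, dec2 = nn.Linear(d_z, d_in), nn.Linear(d_z, d_in)   # per-view decoders

x1, x2 = torch.randn(16, d_in), torch.randn(16, d_in)     # paired views
z1, z2 = enc1(x1), enc2(x2)

mse = nn.MSELoss()
# Each view must be recoverable from the *other* view's latent code.
loss = mse(dec1(z2), x1) + mse(dec2(z1), x2)
loss.backward()
print(float(loss))
```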
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller datasets.
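A sketch of the underlying counterfactual idea: translate an image toward the opposite class and inspect the pixel-wise difference. The untrained network below is only a placeholder for a trained CycleGAN generator:

```python
# Sketch: counterfactual explanation by class translation; the difference
# map highlights the regions that would have to change.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))   # placeholder translator

x = torch.randn(1, 1, 128, 128)          # stand-in "pathological" image
with torch.no_grad():
    counterfactual = G(x)                 # stand-in "healthy" version
difference_map = (x - counterfactual).abs()
print(difference_map.mean())              # large values mark decision-relevant regions
```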
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Learning Binary Semantic Embedding for Histology Image Classification and Retrieval [56.34863511025423]
We propose a novel method for Learning Binary Semantic Embedding (LBSE).
Based on the efficient and effective embedding, classification and retrieval are performed to provide interpretable computer-assisted diagnosis for histology images.
Experiments conducted on three benchmark datasets validate the superiority of LBSE under various scenarios.
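A sketch of binary-embedding retrieval in general: sign-binarize embeddings and rank by Hamming distance (random vectors stand in for LBSE's learned embeddings):

```python
# Sketch: retrieval with binary codes and Hamming distance.
import torch

torch.manual_seed(0)
db = torch.randn(1000, 64)                       # database embeddings
query = torch.randn(1, 64)

db_codes = db > 0                                # binarize to {0, 1}
q_code = query > 0

hamming = (db_codes != q_code).sum(dim=1)        # distance to each item
topk = hamming.topk(5, largest=False)
print(topk.indices, topk.values)                 # nearest histology images
```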
arXiv Detail & Related papers (2020-10-07T08:36:44Z)
- Explainable deep learning models in medical image analysis [0.0]
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them.
Recent explainability studies aim to show the features that influence the decision of a model the most.
A review of the current applications of explainable deep learning for different medical imaging tasks is presented here.
arXiv Detail & Related papers (2020-05-28T06:31:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.