Transparency of Deep Neural Networks for Medical Image Analysis: A
Review of Interpretability Methods
- URL: http://arxiv.org/abs/2111.02398v1
- Date: Mon, 1 Nov 2021 01:42:26 GMT
- Title: Transparency of Deep Neural Networks for Medical Image Analysis: A
Review of Interpretability Methods
- Authors: Zohaib Salahuddin, Henry C Woodruff, Avishek Chatterjee and Philippe
Lambin
- Abstract summary: Deep neural networks have shown performance equal to or better than that of clinicians in many tasks.
Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not well understood.
The interpretability of deep neural networks must be ensured before they can be incorporated into the routine clinical workflow.
- Score: 3.3918638314432936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence has emerged as a useful aid in numerous
clinical applications for diagnosis and treatment decisions. Owing to the rapid
increase in available data and computational power, deep neural networks have
shown performance equal to or better than that of clinicians in many tasks. To
conform to the principles of trustworthy AI, an AI system must be transparent,
robust, and fair, and must ensure accountability. Current deep neural solutions
are referred to as black boxes because the specifics of their decision-making
process are not well understood. Therefore, the interpretability of deep neural
networks must be ensured before they can be incorporated into the routine
clinical workflow. In this narrative review, we used systematic keyword
searches and domain expertise to identify nine types of interpretability
methods that have been applied to deep learning models for medical image
analysis, categorized by the type of explanation generated and by technical
similarity. Furthermore, we report the progress made towards evaluating the
explanations produced by various interpretability methods. Finally, we discuss
limitations, provide guidelines for using interpretability methods, and outline
future directions concerning the interpretability of deep neural networks for
medical image analysis.
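To make the kind of methods surveyed above concrete, the following is a minimal, illustrative sketch of one widely used post-hoc attribution technique, a Grad-CAM-style saliency map, applied to an image classifier. It is an editorial example, not code from the review: the ResNet-18 backbone with random weights, the random input tensor standing in for a preprocessed scan, and all variable names are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical stand-in for a trained diagnostic classifier: ResNet-18 with
# random weights (no download needed). A real workflow would load trained weights.
model = models.resnet18(weights=None)
model.eval()

# Capture the feature maps of the last convolutional stage with a forward hook.
activations = {}
def save_activations(module, inputs, output):
    activations["feat"] = output

model.layer4.register_forward_hook(save_activations)

# Placeholder input standing in for a preprocessed, normalised medical image.
image = torch.randn(1, 3, 224, 224)
logits = model(image)
class_idx = logits.argmax(dim=1).item()

# Gradient of the predicted-class score with respect to the captured feature maps.
grads = torch.autograd.grad(logits[0, class_idx], activations["feat"])[0]

# Grad-CAM: weight each feature map by its spatially averaged gradient, sum over
# channels, keep only positive evidence, then upsample to the input resolution.
weights = grads.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]): a heatmap over the input image
```

In a clinical setting, such a heatmap would typically be overlaid on the original image so a clinician can judge whether the model attends to clinically plausible regions.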
Related papers
- Aligning Characteristic Descriptors with Images for Human-Expert-like Explainability [0.0]
In mission-critical domains such as law enforcement and medical diagnosis, the ability to explain and interpret the outputs of deep learning models is crucial.
We propose a novel approach that utilizes characteristic descriptors to explain model decisions by identifying their presence in images.
arXiv Detail & Related papers (2024-11-06T15:47:18Z)
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models (a minimal sketch of the basic procedure is given after this list).
In this work, we consider an adversary that manipulates a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Explainable Artificial Intelligence in Retinal Imaging for the detection of Systemic Diseases [0.0]
This study aims to evaluate an explainable staged grading process without using deep Convolutional Neural Networks (CNNs) directly.
We have proposed a clinician-in-the-loop assisted intelligent workflow that performs a retinal vascular assessment on the fundus images.
The semiautomatic methodology aims at a federated approach to AI in healthcare applications, with more inputs and interpretations from clinicians.
arXiv Detail & Related papers (2022-12-14T07:00:31Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- An Interactive Interpretability System for Breast Cancer Screening with Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts [0.0]
We propose an interpretable framework based on key medical concepts.
We utilize a concept-based graph convolutional network (GCN) to construct the relationships between key medical concepts.
arXiv Detail & Related papers (2022-01-19T14:44:36Z)
- Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units [24.761080054980713]
We demonstrate the efficiency of recent attribution techniques in explaining the diagnostic decision by visualizing the significant factors in the input image.
Our analysis of the individual units demonstrates the necessity of explainability in medical diagnostic decisions.
arXiv Detail & Related papers (2021-07-19T11:49:31Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
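As flagged in the entry on adversarial attacks against neuron activation maximization, the following is a minimal, illustrative sketch of the basic activation-maximization procedure itself: gradient ascent on an input image so that it maximally excites a chosen output unit. The small randomly initialised CNN, the chosen unit, the step count, and the L2 penalty are editorial assumptions, not code from any paper listed above.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network with random weights; in practice the procedure is
# applied to a trained model whose internal or output units one wants to interpret.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the synthesised input is optimised

unit = 0                                                     # output unit to visualise
x = (0.1 * torch.randn(1, 1, 64, 64)).requires_grad_(True)   # synthesised input image
optimizer = torch.optim.Adam([x], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    activation = model(x)[0, unit]
    # Gradient ascent on the unit's activation, with a small L2 penalty to keep
    # the synthesised input bounded; minimising the negative maximises activation.
    loss = -activation + 1e-3 * x.pow(2).sum()
    loss.backward()
    optimizer.step()

print(f"activation of unit {unit} after optimisation: {model(x)[0, unit].item():.3f}")
```

An adversary of the kind studied in that paper would tamper with the model so that the image synthesised this way suggests a misleading interpretation of what the unit encodes.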
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.