TorchEsegeta: Framework for Interpretability and Explainability of
Image-based Deep Learning Models
- URL: http://arxiv.org/abs/2110.08429v1
- Date: Sat, 16 Oct 2021 01:00:15 GMT
- Title: TorchEsegeta: Framework for Interpretability and Explainability of
Image-based Deep Learning Models
- Authors: Soumick Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay,
Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen,
Oliver Speck and Andreas Nürnberger
- Abstract summary: Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning based methods, in practice.
This paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that most influence the decision of the algorithm.
The research presents a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques to deep learning models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clinicians are often very sceptical about applying automatic image processing
approaches, especially deep learning based methods, in practice. One main
reason for this is the black-box nature of these approaches and the inherent
lack of insight into the automatically derived decisions. In order to
increase trust in these methods, this paper presents approaches that help to
interpret and explain the results of deep learning algorithms by depicting the
anatomical areas that most influence the decision of the algorithm. Moreover,
this research presents a unified framework, TorchEsegeta, for applying various
interpretability and explainability techniques to deep learning models and
generating visual interpretations and explanations that clinicians can use to
corroborate their clinical findings and, in turn, gain confidence in
such methods. The framework builds on existing interpretability and
explainability techniques that currently focus on classification models,
extending them to segmentation tasks. In addition, these methods have been
adapted to 3D models for volumetric analysis. The proposed framework provides
methods to quantitatively compare visual explanations using infidelity and
sensitivity metrics. Data scientists can use this framework to perform
post-hoc interpretations and explanations of their models, develop more
explainable tools, and present the findings to clinicians to increase their
faith in such models. The proposed framework was evaluated on a use case
scenario of vessel segmentation models trained on Time-of-Flight (TOF) Magnetic
Resonance Angiogram (MRA) images of the human brain. Quantitative and
qualitative results of a comparative study of different models and
interpretability methods are presented. Furthermore, this paper provides an
extensive overview of several existing interpretability and explainability
methods.
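The sketch below is a hedged illustration of two technical points from the abstract: wrapping a segmentation network so that classification-oriented attribution methods can be applied to it, and scoring the resulting explanation with infidelity and sensitivity metrics. It uses Captum's IntegratedGradients, infidelity and sensitivity_max as generic stand-ins; the tiny 3D model, tensor shapes, target-class reduction and perturbation function are assumptions made for this example and do not reflect TorchEsegeta's actual API.

```python
# Hedged sketch: applying a classification-style attribution method to a
# segmentation model and scoring the explanation with infidelity/sensitivity.
# The model, shapes and perturbation scheme are illustrative assumptions only.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
from captum.metrics import infidelity, sensitivity_max


class SegToScalar(nn.Module):
    """Wrap a segmentation network so attribution methods built for
    classification (which expect one scalar per sample) can be used."""

    def __init__(self, seg_model: nn.Module, target_class: int = 1):
        super().__init__()
        self.seg_model = seg_model
        self.target_class = target_class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.seg_model(x)  # (N, C, D, H, W) logits
        # Reduce to one scalar per sample: sum the target-class logits over the volume.
        return out[:, self.target_class].sum(dim=(1, 2, 3))


# Hypothetical 3D segmentation model and a small synthetic TOF-MRA-like volume.
seg_model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv3d(8, 2, 1))  # 2 classes: background / vessel
wrapped = SegToScalar(seg_model, target_class=1).eval()
volume = torch.rand(1, 1, 16, 32, 32, requires_grad=True)

# Attribution with a classification-oriented method (Integrated Gradients).
ig = IntegratedGradients(wrapped)
attributions = ig.attribute(volume, baselines=torch.zeros_like(volume))


def perturb_fn(inputs):
    # Small Gaussian perturbation; infidelity expects (perturbation, perturbed_input).
    noise = torch.randn_like(inputs) * 0.01
    return noise, inputs - noise


infid = infidelity(wrapped, perturb_fn, volume, attributions)
sens = sensitivity_max(ig.attribute, volume)
print(f"infidelity: {infid.item():.4f}, max sensitivity: {sens.item():.4f}")
```

Summing the target-class logits over the volume is only one possible reduction; a mean over a region of interest, or a per-structure reduction, fits the same wrapper pattern.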
Related papers
- Aligning Characteristic Descriptors with Images for Human-Expert-like Explainability [0.0]
In mission-critical domains such as law enforcement and medical diagnosis, the ability to explain and interpret the outputs of deep learning models is crucial.
We propose a novel approach that utilizes characteristic descriptors to explain model decisions by identifying their presence in images.
arXiv Detail & Related papers (2024-11-06T15:47:18Z)
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a push to employ interpretable algorithms to assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models [11.998309673666167]
MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures.
From the counterfactuals generated, users can then assess the influence of each segment on model predictions.
We evaluate this library with two medical experts.
arXiv Detail & Related papers (2024-04-24T20:04:55Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Explainable Deep Learning Methods in Medical Image Classification: A Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are hardly adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models.
arXiv Detail & Related papers (2022-05-10T09:28:14Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff [0.0]
We explore the current art of explainability and interpretability within a case study in clinical text classification.
We demonstrate various visualization techniques for fully interpretable methods as well as model-agnostic post hoc attributions.
We introduce a framework through which practitioners and researchers can assess the frontier between a model's predictive performance and the quality of its available explanations.
arXiv Detail & Related papers (2021-07-12T19:07:24Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.