Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images
- URL: http://arxiv.org/abs/2303.08632v1
- Date: Wed, 15 Mar 2023 14:00:11 GMT
- Title: Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images
- Authors: Ario Sadafi, Oleksandra Adonkina, Ashkan Khakzar, Peter Lienemann,
Rudolf Matthias Hehr, Daniel Rueckert, Nassir Navab, Carsten Marr
- Abstract summary: We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
- Score: 52.527733226555206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is a key requirement for computer-aided diagnosis systems in
clinical decision-making. Multiple instance learning with attention pooling
provides instance-level explainability; however, for many clinical applications
a deeper, pixel-level explanation is desirable but so far missing. In this
work, we investigate the use of four attribution methods to explain multiple
instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP),
Information Bottleneck Attribution (IBA), and InputIBA. With this collection of
methods, we can derive pixel-level explanations for the task of diagnosing
blood cancer from patients' blood smears. We study two datasets of acute
myeloid leukemia with over 100 000 single cell images and observe how each
attribution method performs on the multiple instance learning architecture
focusing on different properties of the single white blood cells. Additionally,
we compare attribution maps with the annotations of a medical expert to see how
the model's decision-making differs from the human standard. Our study
addresses the challenge of implementing pixel-level explainability in multiple
instance learning models and provides insights for clinicians to better
understand and trust decisions from computer-aided diagnosis systems.
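To make the setup concrete, below is a minimal sketch (not the authors' implementation) of attention-based multiple instance learning over a bag of single-cell images, plus a Grad-CAM-style pixel-level attribution for one instance. The toy encoder, the AttentionMIL and gradcam names, and the random bag are illustrative assumptions; the softmax attention weights give the instance-level explanation, while the CAM heat map gives the pixel-level one.

```python
# Sketch only: attention-pooling MIL + Grad-CAM-style attribution (assumed names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMIL(nn.Module):
    """Bag classifier: a CNN instance encoder followed by attention pooling."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                      # toy instance encoder
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.attention = nn.Sequential(                    # one score per instance
            nn.Linear(feat_dim, 32), nn.Tanh(), nn.Linear(32, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, bag):                                # bag: (n, 3, H, W)
        h = self.encoder(bag)                              # (n, feat_dim)
        a = torch.softmax(self.attention(h), dim=0)        # (n, 1) instance weights
        z = (a * h).sum(dim=0)                             # attention-pooled embedding
        return self.classifier(z), a                       # bag logit, attention

def gradcam(model, bag, idx):
    """Grad-CAM-style map for instance `idx`: channel-weighted conv activations."""
    feats, grads = [], []
    conv = model.encoder[0]
    fh = conv.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = conv.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logit, _ = model(bag)
    logit.backward()                                       # gradient of the bag logit
    fh.remove(); bh.remove()
    f = feats[0][idx].detach()                             # (C, H, W) activations
    g = grads[0][idx]                                      # (C, H, W) gradients
    w = g.mean(dim=(1, 2), keepdim=True)                   # per-channel weights
    cam = F.relu((w * f).sum(dim=0))                       # (H, W) heat map
    return cam / (cam.max() + 1e-8)

bag = torch.randn(12, 3, 32, 32)                           # a bag of 12 cell crops
model = AttentionMIL()
_, attn = model(bag)                                       # instance-level explanation
cam = gradcam(model, bag, idx=int(attn.argmax()))          # pixel-level explanation
```

LRP, IBA, and InputIBA would replace the gradcam step with their respective relevance-propagation or bottleneck rules; the MIL architecture stays the same.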
Related papers
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z)
- Multi-Scale Attention-based Multiple Instance Learning for Classification of Multi-Gigapixel Histology Images [0.0]
We propose a deep learning pipeline for the classification of multi-gigapixel histology images.
We attempt to predict the latent membrane protein 1 (LMP1) status of nasopharyngeal carcinoma (NPC) based on haematoxylin and eosin-stained (H&E) histology images.
arXiv Detail & Related papers (2022-09-07T10:14:02Z)
- Classification of White Blood Cell Leukemia with Low Number of Interpretable and Explainable Features [0.0]
White Blood Cell (WBC) Leukaemia is detected through image-based classification.
Convolutional Neural Networks are used to learn the features needed to classify images of cells as malignant or healthy.
This type of model requires learning a large number of parameters and is difficult to interpret and explain.
We present an XAI model that uses only 24 explainable and interpretable features yet remains highly competitive, outperforming other approaches by about 4.38%.
arXiv Detail & Related papers (2022-01-28T00:08:56Z)
- Towards Interpretable Attention Networks for Cervical Cancer Analysis [24.916577293892182]
We evaluate various state-of-the-art deep learning models for the classification of images of multiple cervical cells.
We show the effectiveness of the residual channel attention model for extracting important features from a group of cells.
The approach also yields interpretable models for the classification of cervical cells.
arXiv Detail & Related papers (2021-05-27T13:28:24Z)
- Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
This forces the model to focus on learning subset-specific knowledge.
The proposed framework proved effective for the long-tailed retinal disease recognition task.
arXiv Detail & Related papers (2021-04-22T13:39:33Z)
- IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Learning Binary Semantic Embedding for Histology Image Classification and Retrieval [56.34863511025423]
We propose a novel method for Learning Binary Semantic Embedding (LBSE).
Based on the efficient and effective embedding, classification and retrieval are performed to provide interpretable computer-assisted diagnosis for histology images.
Experiments conducted on three benchmark datasets validate the superiority of LBSE under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:36:44Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem: identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model that extracts common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning approach for few-shot image classification (sketched below).
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
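Since the last entry builds on Prototypical Networks (Snell et al., 2017), here is a minimal sketch of their core mechanism: a query is classified by its distance to class prototypes, the mean embeddings of each class's support set. The 8-dimensional embeddings and episode sizes are toy assumptions standing in for an encoder's output.

```python
# Sketch only: prototypical-network classification on precomputed embeddings.
import torch

def protonet_logits(support, support_labels, query, n_classes):
    """Negative squared Euclidean distance from each query embedding
    to each class prototype (mean of that class's support embeddings)."""
    protos = torch.stack([                                  # (n_classes, d)
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    return -torch.cdist(query, protos).pow(2)               # (n_query, n_classes)

# Toy 5-way, 3-shot episode with 8-dim embeddings (assumed encoder output).
support = torch.randn(5 * 3, 8)
labels = torch.arange(5).repeat_interleave(3)               # 0,0,0,1,1,1,...
query = torch.randn(10, 8)
probs = protonet_logits(support, labels, query, 5).softmax(dim=1)
```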
This list is automatically generated from the titles and abstracts of the papers on this site.