From slides (through tiles) to pixels: an explainability framework for
weakly supervised models in pre-clinical pathology
- URL: http://arxiv.org/abs/2302.01653v1
- Date: Fri, 3 Feb 2023 10:57:21 GMT
- Title: From slides (through tiles) to pixels: an explainability framework for
weakly supervised models in pre-clinical pathology
- Authors: Marco Bertolini, Van-Khoa Le, Jake Pencharz, Andreas Poehlmann,
Djork-Arné Clevert, Santiago Villalba, Floriane Montanari
- Abstract summary: We propose a novel eXplainable AI (XAI) framework and its application to deep learning models trained on Whole Slide Images (WSIs) in Digital Pathology.
Specifically, we apply our methods to a multi-instance-learning (MIL) model, which is trained solely on slide-level labels.
We show that the explanations on important tiles of the whole slide correlate with tissue changes between healthy regions and lesions, but do not behave like a human annotator.
- Score: 1.53934570513443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In pre-clinical pathology, there is a paradox between the abundance of raw
data (whole slide images from many organs of many individual animals) and the
lack of pixel-level slide annotations done by pathologists. Due to time
constraints and requirements from regulatory authorities, diagnoses are instead
stored as slide labels. Weakly supervised training is designed to take
advantage of those data, and the trained models can be used by pathologists to
rank slides by their probability of containing a given lesion of interest. In
this work, we propose a novel contextualized eXplainable AI (XAI) framework and
its application to deep learning models trained on Whole Slide Images (WSIs) in
Digital Pathology. Specifically, we apply our methods to a
multi-instance-learning (MIL) model, which is trained solely on slide-level
labels, without the need for pixel-level annotations. We quantitatively
validate our methods by measuring the agreement of our explanations'
heatmaps with pathologists' annotations, as well as with predictions from a
segmentation model trained on such annotations. We demonstrate the stability of
the explanations with respect to input shifts, and the fidelity with respect to
increased model performance. We quantitatively evaluate the correlation between
available pixel-wise annotations and explainability heatmaps. We show that the
explanations on important tiles of the whole slide correlate with tissue
changes between healthy regions and lesions, but do not exactly behave like a
human annotator. This result is coherent with the model training strategy.
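The slide-to-pixel agreement evaluation described above can be sketched with a simple overlap metric. The snippet below is illustrative, not the authors' code: it assumes an attention-based MIL model has already produced per-tile importance scores, scatters them onto a slide-level grid, and compares the thresholded heatmap to a binary pathologist annotation mask via the Dice coefficient (the function names and the 0.5 threshold are assumptions).

```python
import numpy as np

def tile_heatmap(tile_scores, tile_coords, grid_shape):
    """Scatter per-tile importance scores onto a slide-level grid.

    tile_scores: iterable of floats in [0, 1], one per tile.
    tile_coords: matching (row, col) grid positions for each tile.
    """
    heatmap = np.zeros(grid_shape, dtype=float)
    for score, (r, c) in zip(tile_scores, tile_coords):
        heatmap[r, c] = score
    return heatmap

def dice_agreement(heatmap, annotation_mask, threshold=0.5):
    """Dice overlap between a thresholded heatmap and a binary annotation mask."""
    pred = heatmap >= threshold
    truth = annotation_mask.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both empty: vacuously perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x2 slide grid: one "important" tile coinciding with the annotated lesion tile.
hm = tile_heatmap([0.9, 0.2], [(0, 0), (1, 1)], (2, 2))
mask = np.array([[1, 0], [0, 0]])
score = dice_agreement(hm, mask)  # -> 1.0 (perfect overlap)
```

A pixel-level comparison against a segmentation model's output would use the same Dice computation, with the heatmap first upsampled from the tile grid to pixel resolution.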
Related papers
- Pathology-knowledge Enhanced Multi-instance Prompt Learning for Few-shot Whole Slide Image Classification [19.070685830687285]
In clinical settings, restricted access to pathology slides is inevitable due to patient privacy concerns and the prevalence of rare or emerging diseases.
This paper proposes a multi-instance prompt learning framework enhanced with pathology knowledge.
Our method demonstrates superior performance in three challenging clinical tasks, significantly outperforming comparative few-shot methods.
arXiv Detail & Related papers (2024-07-15T15:31:55Z) - PRISM: A Multi-Modal Generative Foundation Model for Slide-Level Histopathology [9.556246087301883]
We present a slide-level foundation model for H&E-stained histopathology, PRISM, that builds on Virchow tile embeddings.
PRISM produces slide-level embeddings with the ability to generate clinical reports, resulting in several modes of use.
Using text prompts, PRISM achieves zero-shot cancer detection and sub-typing performance approaching that of a supervised aggregator model.
arXiv Detail & Related papers (2024-05-16T16:59:12Z) - COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Scaling Laws of Synthetic Images for Model Training ... for Now [54.43596959598466]
We study the scaling laws of synthetic images generated by state-of-the-art text-to-image models.
We observe that synthetic images demonstrate a scaling trend similar to, but slightly less effective than, real images in CLIP training.
arXiv Detail & Related papers (2023-12-07T18:59:59Z) - SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep
Models for Kidney Stone Classification [62.403510793388705]
Deep learning has produced encouraging results for kidney stone classification using endoscope images.
The shortage of annotated training data poses a severe problem in improving the performance and generalization ability of the trained model.
We propose SegPrompt to alleviate the data shortage problems by exploiting segmentation maps from two aspects.
arXiv Detail & Related papers (2023-03-15T01:30:48Z) - A Generalist Framework for Panoptic Segmentation of Images and Videos [61.61453194912186]
We formulate panoptic segmentation as a discrete data generation problem, without relying on inductive bias of the task.
A diffusion model is proposed to model panoptic masks, with a simple architecture and generic loss function.
Our method is capable of modeling video (in a streaming setting) and thereby learns to track object instances automatically.
arXiv Detail & Related papers (2022-10-12T16:18:25Z) - A Deep Reinforcement Learning Framework for Rapid Diagnosis of Whole
Slide Pathological Images [4.501311544043762]
We propose a weakly supervised deep reinforcement learning framework, which can greatly reduce the time required for network inference.
We use neural networks to construct the search model and the decision model of the reinforcement learning agent, respectively.
Experimental results show that our proposed method can achieve fast inference and accurate prediction of whole slide images without any pixel-level annotations.
arXiv Detail & Related papers (2022-05-05T14:20:29Z) - Going Beyond Saliency Maps: Training Deep Models to Interpret Deep
Models [16.218680291606628]
Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders.
We propose to train simulator networks that can warp a given image to inject or remove patterns of the disease.
We apply our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of the Alzheimer's disease and alcohol use disorder.
arXiv Detail & Related papers (2021-02-16T15:57:37Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without acquiring expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.