TPMIL: Trainable Prototype Enhanced Multiple Instance Learning for Whole
Slide Image Classification
- URL: http://arxiv.org/abs/2305.00696v1
- Date: Mon, 1 May 2023 07:39:19 GMT
- Title: TPMIL: Trainable Prototype Enhanced Multiple Instance Learning for Whole
Slide Image Classification
- Authors: Litao Yang, Deval Mehta, Sidong Liu, Dwarikanath Mahapatra, Antonio Di
Ieva, Zongyuan Ge
- Abstract summary: We develop a Trainable Prototype enhanced deep MIL framework for weakly supervised WSI classification.
Our method is able to reveal the correlations between different tumor subtypes through distances between corresponding trained prototypes.
We test our method on two WSI datasets and it achieves a new SOTA.
- Score: 13.195971707693365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital pathology based on whole slide images (WSIs) plays a key role in
cancer diagnosis and clinical practice. Due to the high resolution of the WSI
and the unavailability of patch-level annotations, WSI classification is
usually formulated as a weakly supervised problem, which relies on multiple
instance learning (MIL) based on patches of a WSI. In this paper, we aim to
learn an optimal patch-level feature space by integrating prototype learning
with MIL. To this end, we develop a Trainable Prototype enhanced deep MIL
(TPMIL) framework for weakly supervised WSI classification. In contrast to the
conventional methods which rely on a certain number of selected patches for
feature space refinement, we softly cluster all the instances by allocating
them to their corresponding prototypes. Additionally, our method is able to
reveal the correlations between different tumor subtypes through distances
between corresponding trained prototypes. More importantly, TPMIL also provides
more accurate interpretability based on the distances of the instances from the
trained prototypes, which serves as an alternative to the conventional
attention score-based interpretability. We test our method on two
WSI datasets and it achieves a new SOTA. GitHub repository:
https://github.com/LitaoYang-Jet/TPMIL
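The core mechanism the abstract describes, softly clustering all instances by allocating them to trainable prototypes, can be illustrated with a softmax over negative distances. This is a minimal NumPy sketch under stated assumptions (random embeddings, squared Euclidean distance, a temperature `tau`), not the paper's actual implementation; see the GitHub repository for that.

```python
import numpy as np

def soft_assign(instances, prototypes, tau=1.0):
    """Softly allocate each instance embedding to every prototype
    via a softmax over negative squared Euclidean distances."""
    # d[i, k] = ||x_i - p_k||^2
    d = ((instances[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    logits = -d / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)      # each row sums to 1

rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 8))      # 5 patch embeddings from one WSI bag
prototypes = rng.normal(size=(3, 8))   # 3 trainable subtype prototypes

assign = soft_assign(patches, prototypes)   # (5, 3) soft cluster weights
# pairwise prototype distances: the paper's proxy for subtype correlations
proto_dist = np.linalg.norm(prototypes[:, None] - prototypes[None, :], axis=-1)
```

In the same spirit, an instance's distance to its class prototype (rather than an attention score) would rank how representative each patch is, which is the interpretability alternative the abstract mentions.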
Related papers
- Queryable Prototype Multiple Instance Learning with Vision-Language Models for Incremental Whole Slide Image Classification [10.667645628712542]
This paper proposes the first Vision-Language-based framework with Queryable Prototype Multiple Instance Learning (QPMIL-VL) specially designed for incremental WSI classification.
Experiments on four TCGA datasets demonstrate that our QPMIL-VL framework is effective for incremental WSI classification.
arXiv Detail & Related papers (2024-10-14T14:49:34Z)
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z)
- Rethinking Pre-trained Feature Extractor Selection in Multiple Instance Learning for Whole Slide Image Classification [2.6703221234079946]
Multiple instance learning (MIL) has become a preferred method for classifying gigapixel whole slide images (WSIs).
This study examines MIL feature extractors across three dimensions: pre-training dataset, backbone model, and pre-training method.
arXiv Detail & Related papers (2024-08-02T10:34:23Z)
- MamMIL: Multiple Instance Learning for Whole Slide Images with State Space Models [56.37780601189795]
We propose a framework named MamMIL for WSI analysis.
We represent each WSI as an undirected graph.
To address the problem that Mamba can only process 1D sequences, we propose a topology-aware scanning mechanism.
arXiv Detail & Related papers (2024-03-08T09:02:13Z)
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this boosts the accuracy of the trained models with only a few regions annotated for each class.
It may in the future serve as an important contribution to train MIL models in the clinically relevant context of cancer classification in histopathology.
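The selection step described above, ranking slides by a confidence metric and querying the most uncertain ones for expert annotation, can be sketched as follows. The summary does not specify the paper's exact metric; prediction entropy is used here as one common choice, and the slide-level probabilities are hypothetical model outputs.

```python
import numpy as np

def prediction_entropy(probs):
    """Entropy of a categorical prediction; higher means less confident."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# hypothetical slide-level class probabilities from an attention-based MIL model
slide_probs = np.array([
    [0.95, 0.05],   # confident prediction
    [0.55, 0.45],   # highly uncertain
    [0.70, 0.30],
])

H = prediction_entropy(slide_probs)
query_order = np.argsort(-H)   # most uncertain WSIs first, for annotation
```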
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Feature Re-calibration based MIL for Whole Slide Image Classification [7.92885032436243]
Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases.
We propose to re-calibrate the distribution of a WSI bag (instances) by using the statistics of the max-instance (critical) feature.
We employ a position encoding module (PEM) to model spatial/morphological information, and perform pooling by multi-head self-attention (PSMA) with a Transformer encoder.
arXiv Detail & Related papers (2022-06-22T07:00:39Z)
- DGMIL: Distribution Guided Multiple Instance Learning for Whole Slide Image Classification [9.950131528559211]
We propose a feature distribution guided deep MIL framework for WSI classification and positive patch localization.
Experiments on the CAMELYON16 dataset and the TCGA Lung Cancer dataset show that our method achieves new SOTA for both global classification and positive patch localization tasks.
arXiv Detail & Related papers (2022-06-17T16:04:30Z)
- Dual-stream Multiple Instance Learning Network for Whole Slide Image Classification with Self-supervised Contrastive Learning [16.84711797934138]
We address the challenging problem of whole slide image (WSI) classification.
WSI classification can be cast as a multiple instance learning (MIL) problem when only slide-level labels are available.
We propose a MIL-based method for WSI classification and tumor detection that does not require localized annotations.
arXiv Detail & Related papers (2020-11-17T20:51:15Z)
- Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer [116.46533207849619]
We study the impact of scale and location mismatch in the few-shot learning scenario.
We propose a novel Spatially-aware Matching scheme to effectively perform matching across multiple scales and locations.
arXiv Detail & Related papers (2020-01-06T14:10:20Z)
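Several of the papers listed above build on attention-based MIL pooling, the baseline that TPMIL's prototype-distance interpretability is positioned against. A minimal sketch of that generic mechanism (untrained random weights `V` and `w` are illustrative assumptions; no specific paper's architecture is reproduced here):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance, then take a
    weighted average of instance embeddings as the bag representation."""
    scores = np.tanh(instances @ V) @ w   # (n,) raw attention logits
    scores -= scores.max()                # numerical stability
    a = np.exp(scores)
    a /= a.sum()                          # attention weights sum to 1
    return a @ instances, a               # bag embedding, per-instance weights

rng = np.random.default_rng(1)
bag = rng.normal(size=(6, 16))   # 6 patch embeddings of dim 16 from one WSI
V = rng.normal(size=(16, 32))    # hypothetical projection matrix (untrained)
w = rng.normal(size=32)          # hypothetical attention vector (untrained)

bag_repr, attn = attention_mil_pool(bag, V, w)
```

The per-instance weights `attn` are what attention score-based interpretability inspects; a classifier head on `bag_repr` would produce the slide-level prediction.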
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.