Differentiable Zooming for Multiple Instance Learning on Whole-Slide Images
- URL: http://arxiv.org/abs/2204.12454v1
- Date: Tue, 26 Apr 2022 17:20:50 GMT
- Title: Differentiable Zooming for Multiple Instance Learning on Whole-Slide Images
- Authors: Kevin Thandiackal, Boqi Chen, Pushpak Pati, Guillaume Jaume, Drew F. K. Williamson, Maria Gabrani, Orcun Goksel
- Abstract summary: We propose ZoomMIL, a method that learns to perform multi-level zooming in an end-to-end manner.
The proposed method outperforms the state-of-the-art MIL methods in WSI classification on two large datasets.
- Score: 4.928363812223965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Instance Learning (MIL) methods have become increasingly popular for
classifying giga-pixel sized Whole-Slide Images (WSIs) in digital pathology.
Most MIL methods operate at a single WSI magnification by processing all the
tissue patches. Such a formulation imposes high computational requirements and
constrains the contextualization of the WSI-level representation to a single
scale. A few MIL methods extend to multiple scales, but are computationally
more demanding. In this paper, inspired by the pathological diagnostic process,
we propose ZoomMIL, a method that learns to perform multi-level zooming in an
end-to-end manner. ZoomMIL builds WSI representations by aggregating
tissue-context information from multiple magnifications. The proposed method
outperforms the state-of-the-art MIL methods in WSI classification on two large
datasets, while significantly reducing computational demands, in terms of
Floating-Point Operations (FLOPs) and processing time, by up to 40x.
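To make the multi-magnification idea concrete, below is a minimal sketch of attention-based MIL over two magnifications with a zoom-in step. This is an illustration only, not the authors' implementation: ZoomMIL learns the zooming end-to-end, whereas the sketch uses a plain (non-differentiable) top-k over attention scores as a stand-in; the gated-attention pooling, feature dimensions, and the one-to-one pairing of low- and high-magnification patch features are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class GatedAttentionPool(nn.Module):
    """Gated attention-based MIL pooling (in the style of Ilse et al., 2018)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.v = nn.Linear(dim, hidden)
        self.u = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (num_patches, dim)
        a = self.w(torch.tanh(self.v(x)) * torch.sigmoid(self.u(x)))  # (N, 1)
        a = torch.softmax(a, dim=0)
        return (a * x).sum(dim=0), a.squeeze(-1)  # bag embedding (dim,), scores (N,)


class TwoLevelZoomMIL(nn.Module):
    """Illustrative two-magnification MIL with a (non-differentiable) zoom step."""

    def __init__(self, dim: int = 512, num_classes: int = 2, k: int = 16):
        super().__init__()
        self.k = k
        self.pool_low = GatedAttentionPool(dim)
        self.pool_high = GatedAttentionPool(dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, feats_low, feats_high):
        # feats_low:  (N, dim) patch features at low magnification
        # feats_high: (N, dim) features of the same regions at high magnification
        #             (one high-magnification feature per low-magnification patch,
        #             a simplification made for this sketch)
        z_low, attn = self.pool_low(feats_low)
        # "Zoom in": keep only the k most-attended regions at high magnification.
        idx = torch.topk(attn, k=min(self.k, attn.numel())).indices
        z_high, _ = self.pool_high(feats_high[idx])
        return self.classifier(torch.cat([z_low, z_high], dim=-1))


# One slide (bag) with 500 patches and random stand-in features:
model = TwoLevelZoomMIL()
logits = model(torch.randn(500, 512), torch.randn(500, 512))
```

The hard top-k above is precisely the step that an end-to-end formulation such as the one described in the abstract must replace with a differentiable selection, so that gradients from the slide-level loss can shape which regions are zoomed into.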
Related papers
- ZoomLDM: Latent Diffusion Model for multi-scale image generation [57.639937071834986]
We present ZoomLDM, a diffusion model tailored for generating images across multiple scales.
Central to our approach is a novel magnification-aware conditioning mechanism that utilizes self-supervised learning (SSL) embeddings.
ZoomLDM achieves state-of-the-art image generation quality across all scales, excelling in the data-scarce setting of generating thumbnails of entire large images.
arXiv Detail & Related papers (2024-11-25T22:39:22Z)
- MamMIL: Multiple Instance Learning for Whole Slide Images with State Space Models [56.37780601189795]
We propose a framework named MamMIL for WSI analysis.
We represent each WSI as an undirected graph.
To address the problem that Mamba can only process 1D sequences, we propose a topology-aware scanning mechanism.
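As a generic illustration (not MamMIL's actual topology-aware scanning mechanism, which the summary above does not detail), a breadth-first traversal is one simple way to linearize an undirected patch graph so that patches adjacent in the tissue stay close together in the resulting 1D sequence; the adjacency and features below are toy values.

```python
from collections import deque


def bfs_scan_order(adjacency, start=0):
    """Breadth-first linearization of an undirected patch graph (assumed connected)."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adjacency.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order


# Toy 4-patch graph with edges 0-1, 0-2, 2-3 and 1-D toy features per patch:
adjacency = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
patch_features = {0: [0.1], 1: [0.4], 2: [0.7], 3: [0.2]}
sequence = [patch_features[i] for i in bfs_scan_order(adjacency)]  # ordered 1D input
```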
arXiv Detail & Related papers (2024-03-08T09:02:13Z)
- Multi-level Multiple Instance Learning with Transformer for Whole Slide Image Classification [32.43847786719133]
Whole slide image (WSI) refers to a type of high-resolution scanned tissue image, which is extensively employed in computer-assisted diagnosis (CAD).
We propose a Multi-level MIL (MMIL) scheme by introducing a hierarchical structure to MIL, which enables efficient handling of MIL tasks involving a large number of instances.
Based on MMIL, we instantiate MMIL-Transformer, an efficient Transformer model with windowed exact self-attention for large-scale MIL tasks.
arXiv Detail & Related papers (2023-06-08T08:29:10Z)
- Cross-scale Multi-instance Learning for Pathological Image Diagnosis [20.519711186151635]
Multi-instance learning (MIL) is a common solution for working with high-resolution images by classifying bags of objects.
We propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis.
arXiv Detail & Related papers (2023-04-01T03:52:52Z)
- A Dual-branch Self-supervised Representation Learning Framework for Tumour Segmentation in Whole Slide Images [12.961686610789416]
Self-supervised learning (SSL) has emerged as an alternative solution to reduce the annotation overheads in whole slide images.
These SSL approaches are not designed for handling multi-resolution WSIs, which limits their performance in learning discriminative image features.
We propose a Dual-branch SSL Framework for WSI tumour segmentation (DSF-WSI) that can effectively learn image features from multi-resolution WSIs.
arXiv Detail & Related papers (2023-03-20T10:57:28Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, to the extent that it achieves the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Embedding Space Augmentation for Weakly Supervised Learning in Whole-Slide Images [3.858809922365453]
Multiple Instance Learning (MIL) is a widely employed framework for learning on gigapixel whole-slide images (WSIs) from WSI-level annotations.
We present EmbAugmenter, a data augmentation generative adversarial network (DA-GAN) that can synthesize data augmentations in the embedding space rather than in the pixel space.
Our approach outperforms MIL without augmentation and is on par with traditional patch-level augmentation for MIL training while being substantially faster.
arXiv Detail & Related papers (2022-10-31T02:06:39Z)
- DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning for Histopathology Whole Slide Image Classification [18.11776334311096]
Multiple instance learning (MIL) has been increasingly used in the classification of histopathology whole slide images (WSIs).
We propose to virtually enlarge the number of bags by introducing the concept of pseudo-bags.
We also derive the instance probability under the framework of attention-based MIL, and use this derivation to help construct and analyze the proposed framework.
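A minimal sketch of the pseudo-bag idea as described above, under the assumption that pseudo-bags are formed by randomly partitioning one slide's patch features and letting each partition inherit the slide label; the split count and feature shapes are illustrative, and the attention-based instance-probability derivation is not reproduced here.

```python
import numpy as np


def make_pseudo_bags(instances, label, num_pseudo_bags=4, seed=None):
    """Randomly split one bag's instances into pseudo-bags that share the bag label."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(instances))
    return [(instances[s], label) for s in np.array_split(idx, num_pseudo_bags) if len(s)]


# One slide with 100 patch features of dimension 512, labelled positive:
pseudo_bags = make_pseudo_bags(np.random.randn(100, 512), label=1)
```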
arXiv Detail & Related papers (2022-03-22T22:33:42Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selection. The selected patches are then fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and capturing self-similarity.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art for smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)