BEL: A Bag Embedding Loss for Transformer enhances Multiple Instance
Whole Slide Image Classification
- URL: http://arxiv.org/abs/2303.01377v1
- Date: Thu, 2 Mar 2023 16:02:55 GMT
- Title: BEL: A Bag Embedding Loss for Transformer enhances Multiple Instance
Whole Slide Image Classification
- Authors: Daniel Sens, Ario Sadafi, Francesco Paolo Casale, Nassir Navab,
Carsten Marr
- Abstract summary: Bag Embedding Loss (BEL) forces the model to learn a discriminative bag-level representation by minimizing the distance between bag embeddings of the same class and maximizing the distance between different classes.
We show that with BEL, TransMIL outperforms the baseline models on both datasets.
- Score: 39.53132774980783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Instance Learning (MIL) has become the predominant approach for
classification tasks on gigapixel histopathology whole slide images (WSIs).
Within the MIL framework, single WSIs (bags) are decomposed into patches
(instances), with only WSI-level annotation available. Recent MIL approaches
produce highly informative bag level representations by utilizing the
transformer architecture's ability to model the dependencies between instances.
However, when applied to high magnification datasets, problems emerge due to
the large number of instances and the weak supervisory learning signal. To
address this problem, we propose to additionally train transformers with a
novel Bag Embedding Loss (BEL). BEL forces the model to learn a discriminative
bag-level representation by minimizing the distance between bag embeddings of
the same class and maximizing the distance between different classes. We
evaluate BEL with the Transformer architecture TransMIL on two publicly
available histopathology datasets, BRACS and CAMELYON17. We show that with BEL,
TransMIL outperforms the baseline models on both datasets, thus contributing to
the clinically highly relevant AI-based tumor classification of histological
patient material.
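The abstract describes BEL as pulling bag embeddings of the same class together while pushing embeddings of different classes apart. The paper's exact formulation is not given in this listing; a minimal NumPy sketch of such a contrastive-style pairwise loss (the squared-distance terms, the margin hinge for negative pairs, and the pairwise averaging are all assumptions for illustration) might look like:

```python
import numpy as np

def bag_embedding_loss(embeddings, labels, margin=1.0):
    """Contrastive-style loss over bag-level embeddings:
    same-class pairs are penalized by their squared distance
    (pulled together); different-class pairs are penalized only
    when closer than `margin` (pushed apart)."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                total += d ** 2                     # pull same-class bags together
            else:
                total += max(0.0, margin - d) ** 2  # push different classes apart
            pairs += 1
    return total / pairs
```

In this sketch the loss is zero when same-class bags coincide and different-class bags are at least `margin` apart; in practice such a term would be added to the standard WSI classification loss during transformer training.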
Related papers
- MamMIL: Multiple Instance Learning for Whole Slide Images with State
Space Models [58.39336492765728]
Pathological diagnosis using whole slide images (WSIs), the gold standard for cancer diagnosis, has achieved superior performance by combining the Transformer with the multiple instance learning (MIL) framework.
We propose a MamMIL framework for WSI classification by integrating the selective structured state space model (i.e., Mamba) with MIL for the first time.
Specifically, to solve the problem that Mamba can only conduct unidirectional one-dimensional (1D) sequence modeling, we innovatively introduce a bidirectional state space model and a 2D context-aware block.
arXiv Detail & Related papers (2024-03-08T09:02:13Z) - Dual-Query Multiple Instance Learning for Dynamic Meta-Embedding based
Tumor Classification [5.121989578393729]
Whole slide image (WSI) assessment is a challenging and crucial step in cancer diagnosis and treatment planning.
Coarse-grained labels are easily accessible, which makes WSI classification an ideal use case for multiple instance learning (MIL).
We propose a novel embedding-based Dual-Query MIL pipeline (DQ-MIL).
arXiv Detail & Related papers (2023-07-14T17:06:49Z) - TPMIL: Trainable Prototype Enhanced Multiple Instance Learning for Whole
Slide Image Classification [13.195971707693365]
We develop a Trainable Prototype enhanced deep MIL framework for weakly supervised WSI classification.
Our method is able to reveal the correlations between different tumor subtypes through distances between corresponding trained prototypes.
We test our method on two WSI datasets and it achieves a new SOTA.
arXiv Detail & Related papers (2023-05-01T07:39:19Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Diagnose Like a Pathologist: Transformer-Enabled Hierarchical
Attention-Guided Multiple Instance Learning for Whole Slide Image
Classification [39.41442041007595]
Multiple Instance Learning and transformers are increasingly popular in histopathology Whole Slide Image (WSI) classification.
We propose a Hierarchical Attention-Guided Multiple Instance Learning framework to fully exploit the WSIs.
Within this framework, an Integrated Attention Transformer is proposed to further enhance the performance of the transformer.
arXiv Detail & Related papers (2023-01-19T15:38:43Z) - Hierarchical Transformer for Survival Prediction Using Multimodality
Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z) - Feature Re-calibration based MIL for Whole Slide Image Classification [7.92885032436243]
Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases.
We propose to re-calibrate the distribution of a WSI bag (instances) by using the statistics of the max-instance (critical) feature.
We employ a position encoding module (PEM) to model spatial/morphological information, and perform pooling by multi-head self-attention (PSMA) with a Transformer encoder.
arXiv Detail & Related papers (2022-06-22T07:00:39Z) - ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for
Image Recognition and Beyond [76.35955924137986]
We propose a Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE.
ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context.
We obtain the state-of-the-art classification performance, i.e., 88.5% Top-1 classification accuracy on ImageNet validation set and the best 91.2% Top-1 accuracy on ImageNet real validation set.
arXiv Detail & Related papers (2022-02-21T10:40:05Z) - ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias [76.16156833138038]
We propose a novel Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE.
ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context.
In each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network.
arXiv Detail & Related papers (2021-06-07T05:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.