Pseudo-Bag Mixup Augmentation for Multiple Instance Learning-Based Whole
Slide Image Classification
- URL: http://arxiv.org/abs/2306.16180v3
- Date: Thu, 2 Nov 2023 09:00:06 GMT
- Title: Pseudo-Bag Mixup Augmentation for Multiple Instance Learning-Based Whole
Slide Image Classification
- Authors: Pei Liu, Luping Ji, Xinyu Zhang, Feng Ye
- Abstract summary: We propose a new Pseudo-bag Mixup (PseMix) data augmentation scheme to improve the training of MIL models.
Our scheme generalizes the Mixup strategy for general images to special WSIs via pseudo-bags.
It is designed as an efficient and decoupled method, neither involving time-consuming operations nor relying on MIL model predictions.
- Score: 18.679580844360615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the challenge of modeling gigapixel images, multiple instance
learning (MIL) has become one of the most important frameworks for Whole Slide
Image (WSI) classification. In current practice, most MIL networks often face
two unavoidable problems in training: i) insufficient WSI data and ii) the
inherent tendency of neural networks to memorize samples. These problems may
hinder MIL models from adequate and efficient training, limiting further
performance gains of classification models on WSIs. Inspired by
the basic idea of Mixup, this paper proposes a new Pseudo-bag Mixup (PseMix)
data augmentation scheme to improve the training of MIL models. This scheme
generalizes the Mixup strategy for general images to special WSIs via
pseudo-bags so that it can be applied to MIL-based WSI classification. Aided by
pseudo-bags, our PseMix fulfills the critical size alignment and semantic
alignment of the Mixup strategy. Moreover, it is designed as an efficient and
decoupled method, neither involving time-consuming operations nor relying on
MIL model predictions. Comparative experiments and ablation studies are
specially designed to evaluate the effectiveness and advantages of our PseMix.
Experimental results show that PseMix often helps state-of-the-art MIL networks
improve their classification performance on WSIs. Besides, it can
also boost the generalization performance of MIL models in special test
scenarios, and promote their robustness to patch occlusion and label noise. Our
source code is available at https://github.com/liupei101/PseMix.
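The pseudo-bag Mixup idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: pseudo-bags are formed by random partition here (the paper derives them differently), and all names and parameters are illustrative.

```python
import numpy as np

def psemix(bag_a, bag_b, y_a, y_b, n_pseudo=8, alpha=1.0, rng=None):
    """Sketch of pseudo-bag Mixup: mix two WSI bags at the pseudo-bag level.

    bag_a, bag_b: (num_instances, feat_dim) instance-feature arrays.
    y_a, y_b: label vectors (e.g. one-hot).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    k = int(round(lam * n_pseudo))  # number of pseudo-bags drawn from bag A

    def split(bag):
        # Random partition into n_pseudo pseudo-bags (illustrative only).
        idx = rng.permutation(len(bag))
        return np.array_split(bag[idx], n_pseudo)

    parts_a, parts_b = split(bag_a), split(bag_b)
    # Size alignment: both bags contribute a fixed number of pseudo-bags.
    mixed_bag = np.concatenate(parts_a[:k] + parts_b[k:], axis=0)
    # Label mixing in the same pseudo-bag proportion, as in Mixup.
    mixed_label = (k / n_pseudo) * y_a + (1 - k / n_pseudo) * y_b
    return mixed_bag, mixed_label
```

Because the mixing operates on pre-extracted instance features, no extra forward passes or model predictions are needed, which matches the abstract's claim of an efficient, decoupled scheme.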
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z)
- PreMix: Boosting Multiple Instance Learning in Digital Histopathology through Pre-training with Intra-Batch Slide Mixing [2.6703221234079946]
PreMix extends the general MIL framework by pre-training the MIL aggregator with an intra-batch slide mixing approach.
It achieves a mean performance improvement of 4.7% over the baseline MIL framework.
Ultimately, PreMix paves the way for more efficient and accurate WSI classification under limited WSI-labeled datasets.
arXiv Detail & Related papers (2024-08-02T10:24:35Z)
- MamMIL: Multiple Instance Learning for Whole Slide Images with State Space Models [56.37780601189795]
We propose a framework named MamMIL for WSI analysis.
We represent each WSI as an undirected graph.
To address the problem that Mamba can only process 1D sequences, we propose a topology-aware scanning mechanism.
arXiv Detail & Related papers (2024-03-08T09:02:13Z)
- Multiple Instance Learning Framework with Masked Hard Instance Mining
for Whole Slide Image Classification [11.996318969699296]
Masked hard instance mining (MHIM-MIL) is presented.
MHIM-MIL uses a Siamese structure (Teacher-Student) with a consistency constraint to explore potential hard instances.
Experimental results on the CAMELYON-16 and TCGA Lung Cancer datasets demonstrate that MHIM-MIL outperforms other latest methods in terms of performance and training cost.
arXiv Detail & Related papers (2023-07-28T01:40:04Z)
- ReMix: A General and Efficient Framework for Multiple Instance Learning
based Whole Slide Image Classification [14.78430890440035]
Whole slide image (WSI) classification often relies on weakly supervised multiple instance learning (MIL) methods to handle gigapixel resolution images and slide-level labels.
We propose ReMix, a general and efficient framework for MIL based WSI classification.
arXiv Detail & Related papers (2022-07-05T04:21:35Z)
- Feature Re-calibration based MIL for Whole Slide Image Classification [7.92885032436243]
Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases.
We propose to re-calibrate the distribution of a WSI bag (instances) by using the statistics of the max-instance (critical) feature.
We employ a position encoding module (PEM) to model spatial/morphological information, and perform pooling by multi-head self-attention (PSMA) with a Transformer encoder.
arXiv Detail & Related papers (2022-06-22T07:00:39Z)
- DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning
for Histopathology Whole Slide Image Classification [18.11776334311096]
Multiple instance learning (MIL) has been increasingly used in the classification of histopathology whole slide images (WSIs).
We propose to virtually enlarge the number of bags by introducing the concept of pseudo-bags.
We also contribute to deriving the instance probability under the framework of attention-based MIL, and utilize the derivation to help construct and analyze the proposed framework.
arXiv Detail & Related papers (2022-03-22T22:33:42Z)
- Boosting Discriminative Visual Representation Learning with
Scenario-Agnostic Mixup [54.09898347820941]
We propose Scenario-Agnostic Mixup (SAMix) for both Self-supervised Learning (SSL) and supervised learning (SL) scenarios.
Specifically, we hypothesize and verify the objective function of mixup generation as optimizing local smoothness between two mixed classes.
A label-free generation sub-network is designed, which effectively provides non-trivial mixup samples and improves transferable abilities.
arXiv Detail & Related papers (2021-11-30T14:49:59Z)
- ReMix: Towards Image-to-Image Translation with Limited Data [154.71724970593036]
We propose a data augmentation method (ReMix) to tackle this issue.
We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples.
The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results.
arXiv Detail & Related papers (2021-03-31T06:24:10Z)
- Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks [75.69896269357005]
Mixup is a data augmentation technique that linearly interpolates input examples and their corresponding labels.
In this paper, we explore how to apply mixup to natural language processing tasks.
We incorporate mixup to transformer-based pre-trained architecture, named "mixup-transformer", for a wide range of NLP tasks.
arXiv Detail & Related papers (2020-10-05T23:37:30Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep
Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
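Several of the papers above (PseMix, SAMix, Mixup-Transformer, Attentive CutMix) build on the original Mixup primitive of Zhang et al. As a reference point, a minimal sketch of that base form, with illustrative names:

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.2, rng=None):
    """Classic Mixup: convex combination of two inputs and their labels.

    lam ~ Beta(alpha, alpha) controls the mixing ratio; the same lam is
    applied to both the inputs and the labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix
```

The listed works adapt this primitive to their settings: PseMix mixes at the pseudo-bag level, Mixup-Transformer interpolates transformer inputs for NLP tasks, and Attentive CutMix replaces the linear blend with attention-guided region pasting.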
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.