Multiple Instance Learning Framework with Masked Hard Instance Mining
for Whole Slide Image Classification
- URL: http://arxiv.org/abs/2307.15254v3
- Date: Thu, 21 Dec 2023 02:07:58 GMT
- Title: Multiple Instance Learning Framework with Masked Hard Instance Mining
for Whole Slide Image Classification
- Authors: Wenhao Tang, Sheng Huang, Xiaoxian Zhang, Fengtao Zhou, Yi Zhang, Bo Liu
- Abstract summary: A multiple instance learning framework with masked hard instance mining (MHIM-MIL) is presented.
MHIM-MIL uses a Siamese structure (Teacher-Student) with a consistency constraint to explore potential hard instances.
Experimental results on the CAMELYON-16 and TCGA Lung Cancer datasets demonstrate that MHIM-MIL outperforms recent state-of-the-art methods in both performance and training cost.
- Score: 11.996318969699296
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The whole slide image (WSI) classification is often formulated as a multiple
instance learning (MIL) problem. Since the positive tissue is only a small
fraction of the gigapixel WSI, existing MIL methods intuitively focus on
identifying salient instances via attention mechanisms. However, this leads to
a bias towards easy-to-classify instances while neglecting hard-to-classify
instances. Prior work has shown that hard examples are beneficial for accurately
modeling a discriminative boundary. By applying this idea at the
instance level, we elaborate a novel MIL framework with masked hard instance
mining (MHIM-MIL), which uses a Siamese structure (Teacher-Student) with a
consistency constraint to explore the potential hard instances. With several
instance masking strategies based on attention scores, MHIM-MIL employs a
momentum teacher to implicitly mine hard instances for training the student
model, which can be any attention-based MIL model. This counter-intuitive
strategy essentially enables the student to learn a better discriminative
boundary. Moreover, the student is used to update the teacher with an
exponential moving average (EMA), which in turn identifies new hard instances
for subsequent training iterations and stabilizes the optimization.
Experimental results on the CAMELYON-16 and TCGA Lung Cancer datasets
demonstrate that MHIM-MIL outperforms recent state-of-the-art methods in both
performance and training cost. The code is available at:
https://github.com/DearCaat/MHIM-MIL.
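The abstract describes two mechanisms: attention-guided masking of salient (easy) instances by a momentum teacher, and an EMA update of the teacher from the student. The following PyTorch-style sketch illustrates one reading of a single training step; the interfaces (`teacher.attention_scores`, the top-k masking rule, `mask_ratio`) are illustrative assumptions, not the authors' exact implementation.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Teacher parameters track the student via an exponential moving average (EMA).
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s.detach(), alpha=1.0 - momentum)

def mhim_step(student, teacher, feats, label, criterion, mask_ratio=0.1):
    """One hypothetical MHIM-style training step for a single WSI bag.

    feats: (N, D) patch features. The momentum teacher scores every instance;
    the most salient (easy) ones are masked out so the student is trained on
    the remaining, harder instances.
    """
    with torch.no_grad():
        attn = teacher.attention_scores(feats)   # assumed interface: per-instance scores, shape (N,)
    n_mask = int(mask_ratio * feats.size(0))
    easy_idx = attn.topk(n_mask).indices         # highest attention = easiest instances
    keep = torch.ones(feats.size(0), dtype=torch.bool)
    keep[easy_idx] = False
    logits = student(feats[keep])                # student sees only the unmasked (hard) instances
    loss = criterion(logits, label)
    loss.backward()
    return loss
```

After each optimizer step, `ema_update(teacher, student)` refreshes the teacher so that new hard instances surface in later iterations; the teacher-student consistency constraint mentioned in the abstract is omitted from this sketch for brevity.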
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z) - MamMIL: Multiple Instance Learning for Whole Slide Images with State Space Models [56.37780601189795]
We propose a framework named MamMIL for WSI analysis.
We represent each WSI as an undirected graph.
To address the problem that Mamba can only process 1D sequences, we propose a topology-aware scanning mechanism.
arXiv Detail & Related papers (2024-03-08T09:02:13Z) - Reproducibility in Multiple Instance Learning: A Case For Algorithmic
Unit Tests [59.623267208433255]
Multiple Instance Learning (MIL) is a sub-domain of classification problems with positive and negative labels and a "bag" of inputs.
In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption.
We identify and demonstrate this problem via a proposed "algorithmic unit test", where we create synthetic datasets that can be solved by a MIL respecting model.
arXiv Detail & Related papers (2023-10-27T03:05:11Z) - Pseudo-Bag Mixup Augmentation for Multiple Instance Learning-Based Whole
Slide Image Classification [18.679580844360615]
We propose a new Pseudo-bag Mixup (PseMix) data augmentation scheme to improve the training of MIL models.
Our scheme generalizes the Mixup strategy from natural images to WSIs via pseudo-bags.
It is designed as an efficient and decoupled method, neither involving time-consuming operations nor relying on MIL model predictions (a generic pseudo-bag sketch appears after this list).
arXiv Detail & Related papers (2023-06-28T13:02:30Z) - Active Learning Principles for In-Context Learning with Large Language
Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z) - Attention Awareness Multiple Instance Neural Network [4.061135251278187]
We propose an attention awareness multiple instance neural network framework.
It consists of an instance-level classifier, a trainable MIL pooling operator based on spatial attention and a bag-level classification layer.
Exhaustive experiments on a series of pattern recognition tasks demonstrate that our framework outperforms many state-of-the-art MIL methods.
arXiv Detail & Related papers (2022-05-27T03:29:17Z) - DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning
for Histopathology Whole Slide Image Classification [18.11776334311096]
Multiple instance learning (MIL) has been increasingly used in the classification of histopathology whole slide images (WSIs).
We propose to virtually enlarge the number of bags by introducing the concept of pseudo-bags.
We also contribute to deriving the instance probability under the framework of attention-based MIL, and utilize the derivation to help construct and analyze the proposed framework.
arXiv Detail & Related papers (2022-03-22T22:33:42Z) - CIL: Contrastive Instance Learning Framework for Distantly Supervised
Relation Extraction [52.94486705393062]
We go beyond typical multi-instance learning (MIL) framework and propose a novel contrastive instance learning (CIL) framework.
Specifically, we regard the initial MIL as the relational triple encoder and contrast positive pairs against negative pairs for each instance.
Experiments demonstrate the effectiveness of our proposed framework, with significant improvements over the previous methods on NYT10, GDS and KBP.
arXiv Detail & Related papers (2021-06-21T04:51:59Z) - Dual-stream Multiple Instance Learning Network for Whole Slide Image
Classification with Self-supervised Contrastive Learning [16.84711797934138]
We address the challenging problem of whole slide image (WSI) classification.
WSI classification can be cast as a multiple instance learning (MIL) problem when only slide-level labels are available.
We propose a MIL-based method for WSI classification and tumor detection that does not require localized annotations.
arXiv Detail & Related papers (2020-11-17T20:51:15Z) - Dual-stream Maximum Self-attention Multi-instance Learning [11.685285490589981]
Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available.
We propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks.
Our method achieves superior performance compared to the best MIL methods and demonstrates state-of-the-art performance on benchmark MIL datasets.
arXiv Detail & Related papers (2020-06-09T22:44:58Z) - Weakly-Supervised Action Localization with Expectation-Maximization
Multi-Instance Learning [82.41415008107502]
Weakly-supervised action localization requires training a model to localize the action segments in a video given only the video-level action label.
It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments).
We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions.
arXiv Detail & Related papers (2020-03-31T23:36:04Z)
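Two of the listed methods (PseMix and DTFD-MIL) build on pseudo-bags, which the blurbs above only name. The Python sketch below illustrates one common reading of the idea: a slide's instances are randomly partitioned into pseudo-bags that inherit the slide label, and (for Mixup-style augmentation) pseudo-bags from two slides can be combined with an interpolated label. The function names and the splitting/mixing rules are illustrative assumptions, not taken from either paper.

```python
import numpy as np

def split_into_pseudo_bags(feats, n_pseudo=4, rng=None):
    """Randomly partition a bag's instance features (N, D) into n_pseudo
    pseudo-bags; each pseudo-bag inherits the parent slide's label (assumed rule)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.permutation(len(feats))
    return [feats[chunk] for chunk in np.array_split(idx, n_pseudo)]

def mixup_pseudo_bags(bags_a, bags_b, label_a, label_b, alpha=1.0, rng=None):
    """Mixup-style augmentation over pseudo-bags (illustrative): take a
    lam-fraction of pseudo-bags from slide A, the rest from slide B,
    and interpolate the labels accordingly."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    k = int(round(lam * len(bags_a)))
    mixed = bags_a[:k] + bags_b[k:]
    mixed_feats = np.concatenate(mixed, axis=0)
    mixed_label = lam * label_a + (1 - lam) * label_b
    return mixed_feats, mixed_label
```

This sketch only conveys the general pseudo-bag mechanism; DTFD-MIL additionally distills features across two tiers, and PseMix defines its own mixing and label-assignment policy.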