Active Learning Enhances Classification of Histopathology Whole Slide
Images with Attention-based Multiple Instance Learning
- URL: http://arxiv.org/abs/2303.01342v1
- Date: Thu, 2 Mar 2023 15:18:58 GMT
- Title: Active Learning Enhances Classification of Histopathology Whole Slide
Images with Attention-based Multiple Instance Learning
- Authors: Ario Sadafi, Nassir Navab, Carsten Marr
- Abstract summary: We train an attention-based MIL model and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention-guiding loss, this boosts the accuracy of the trained models even when only a few regions are annotated for each class.
It may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
- Score: 48.02011627390706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many histopathology tasks, sample classification depends on morphological
details in tissue or single cells that are only visible at the highest
magnification. For a pathologist, this implies tedious zooming in and out,
while for a computational decision support algorithm, it leads to the analysis
of a huge number of small image patches per whole slide image (WSI).
Attention-based multiple instance learning (MIL), where attention estimation is
learned in a weakly supervised manner, has been successfully applied in
computational histopathology, but it is challenged by large numbers of
irrelevant patches, reducing its accuracy. Here, we present an active learning
approach to the problem. Querying the expert to annotate regions of interest in
a WSI guides the formation of high-attention regions for MIL. We train an
attention-based MIL model and calculate a confidence metric for every image in the
dataset to select the most uncertain WSIs for expert annotation. We test our
approach on the CAMELYON17 dataset classifying metastatic lymph node sections
in breast cancer. With a novel attention-guiding loss, this boosts the accuracy
of the trained models even when only a few regions are annotated for each class.
Active learning thus improves WSI classification accuracy, leads to faster and
more robust convergence, and speeds up the annotation process. It may in the
future serve as an important contribution to training MIL models in the
clinically relevant context of cancer classification in histopathology.
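To make the described approach concrete, below is a minimal sketch, assuming a PyTorch setup, of gated attention-based MIL pooling combined with an attention-guiding term on expert-annotated regions and a predictive-entropy confidence metric for selecting the most uncertain WSIs for annotation. This is not the authors' released code: the names (AttentionMIL, attention_guiding_loss, wsi_uncertainty, select_for_annotation), the exact form of the attention-guiding loss, and the ROI-mask format are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; NOT the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionMIL(nn.Module):
    """Gated attention-based MIL pooling over a bag of patch features."""

    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attn_V = nn.Linear(feat_dim, hidden_dim)
        self.attn_U = nn.Linear(feat_dim, hidden_dim)
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):  # patch_feats: (n_patches, feat_dim)
        gate = torch.tanh(self.attn_V(patch_feats)) * torch.sigmoid(self.attn_U(patch_feats))
        attn = torch.softmax(self.attn_w(gate), dim=0)   # (n_patches, 1), sums to 1
        bag_feat = (attn * patch_feats).sum(dim=0)       # attention-weighted bag embedding
        return self.classifier(bag_feat), attn.squeeze(-1)


def attention_guiding_loss(attn, roi_mask, eps=1e-8):
    """One plausible attention-guiding term: push attention mass onto
    expert-annotated patches (roi_mask is an assumed 0/1 vector over patches)."""
    return -torch.log((attn * roi_mask).sum() + eps)


def wsi_uncertainty(logits):
    """Predictive entropy as a per-slide confidence metric (higher = more uncertain)."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum()


@torch.no_grad()
def select_for_annotation(model, unlabeled_bags, k=5):
    """Rank unlabelled WSIs by uncertainty and return the k most uncertain indices."""
    scores = [wsi_uncertainty(model(bag)[0]).item() for bag in unlabeled_bags]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```

In each active-learning round, the selected slides would receive region annotations, and the model would be retrained with the attention-guiding term added to the standard bag-level classification loss; the weighting between the two terms is likewise an assumption here.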
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z) - Cross-attention-based saliency inference for predicting cancer
metastasis on whole slide images [3.7282630026096597]
Cross-attention-based salient instance inference MIL (CASiiMIL) is proposed to identify breast cancer lymph node micro-metastasis on whole slide images.
We introduce a negative representation learning algorithm to facilitate the learning of saliency-informed attention weights for improved sensitivity on tumor WSIs.
The proposed model outperforms the state-of-the-art MIL methods on two popular tumor metastasis detection datasets.
arXiv Detail & Related papers (2023-09-18T00:56:19Z) - Context-Aware Self-Supervised Learning of Whole Slide Images [0.0]
A novel two-stage learning technique is presented in this work.
A graph representation capturing the dependencies among regions in the WSI is intuitive:
the entire slide is represented as a graph whose nodes correspond to patches from the WSI (a minimal sketch of this idea follows the related-papers list below).
The proposed framework is then tested using WSIs from prostate and kidney cancers.
arXiv Detail & Related papers (2023-06-07T20:23:05Z) - Dual Attention Model with Reinforcement Learning for Classification of Histology Whole-Slide Images [8.404881822414898]
Digital whole slide images (WSIs) are generally captured at microscopic resolution and encompass extensive spatial data.
We propose a novel dual attention approach, consisting of two main components, both inspired by the visual examination process of a pathologist.
We show that the proposed model achieves performance better than or comparable to the state-of-the-art methods while processing less than 10% of the WSI at the highest magnification.
arXiv Detail & Related papers (2023-02-19T22:26:25Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - Label Cleaning Multiple Instance Learning: Refining Coarse Annotations
on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithm development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), that refines coarse annotations on a single WSI without the need for external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z) - Medulloblastoma Tumor Classification using Deep Transfer Learning with
Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage the clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Whole Slide Images based Cancer Survival Prediction using Attention
Guided Deep Multiple Instance Learning Networks [38.39901070720532]
Current image-based survival models are limited to key patches or clusters derived from Whole Slide Images (WSIs).
We propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) by introducing both siamese MI-FCN and attention-based MIL pooling.
We evaluated our methods on two large cancer whole slide image datasets, and our results suggest that the proposed approach is more effective and suitable for large datasets.
arXiv Detail & Related papers (2020-09-23T14:31:15Z) - Breast Cancer Histopathology Image Classification and Localization using
Multiple Instance Learning [2.4178424543973267]
Computer-aided analysis of microscopic histopathology images can reduce the cost and delays of diagnosis.
Deep learning in histopathology has attracted attention over the last decade, achieving state-of-the-art performance in classification and localization tasks.
We present classification and localization results on two publicly available datasets, BreakHIS and BACH.
arXiv Detail & Related papers (2020-02-16T10:29:16Z)