Histopathological Image Classification based on Self-Supervised Vision Transformer and Weak Labels
- URL: http://arxiv.org/abs/2210.09021v2
- Date: Tue, 18 Apr 2023 01:16:30 GMT
- Title: Histopathological Image Classification based on Self-Supervised Vision Transformer and Weak Labels
- Authors: Ahmet Gokberk Gul, Oezdemir Cetin, Christoph Reich, Tim Prangemeier, Nadine Flinner, Heinz Koeppl
- Abstract summary: We propose Self-ViT-MIL, a novel approach for classifying and localizing cancerous areas based on slide-level annotations.
Self-ViT-MIL surpasses existing state-of-the-art MIL-based approaches in terms of accuracy and area under the curve.
- Score: 16.865729758055448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole Slide Image (WSI) analysis is a powerful method to facilitate the
diagnosis of cancer in tissue samples. Automating this diagnosis poses various
issues, most notably caused by the immense image resolution and limited
annotations. WSIs commonly exhibit resolutions of 100K x 100K pixels. Annotating
cancerous areas in WSIs on the pixel level is prohibitively labor-intensive and
requires a high level of expert knowledge. Multiple instance learning (MIL)
alleviates the need for expensive pixel-level annotations. In MIL, learning is
performed on slide-level labels, in which a pathologist provides information
about whether a slide includes cancerous tissue. Here, we propose Self-ViT-MIL,
a novel approach for classifying and localizing cancerous areas based on
slide-level annotations, eliminating the need for pixel-wise annotated training
data. Self-ViT-MIL is pre-trained in a self-supervised setting to learn rich
feature representation without relying on any labels. The recent Vision
Transformer (ViT) architecture builds the feature extractor of Self-ViT-MIL.
For localizing cancerous regions, a MIL aggregator with global attention is
utilized. To the best of our knowledge, Self-ViT-MIL is the first approach to
introduce self-supervised ViTs in MIL-based WSI analysis tasks. We showcase the
effectiveness of our approach on the common Camelyon16 dataset. Self-ViT-MIL
surpasses existing state-of-the-art MIL-based approaches in terms of accuracy
and area under the curve (AUC).
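To make the slide-level setup concrete, below is a minimal PyTorch sketch of a global-attention MIL aggregator of the kind described in the abstract. It is illustrative only: the class name, the 384-dimensional ViT-S embedding size, and the two-layer attention scorer are assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class AttentionMILAggregator(nn.Module):
    """Global-attention MIL pooling over per-patch embeddings.

    Each WSI is treated as a bag of patch feature vectors; attention
    scores weight each patch's contribution to the slide-level
    prediction and double as a coarse localization map.
    """

    def __init__(self, embed_dim: int = 384, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),  # one raw attention score per patch
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings: torch.Tensor):
        # patch_embeddings: (num_patches, embed_dim), e.g. frozen ViT features
        scores = self.attention(patch_embeddings)                  # (N, 1)
        weights = torch.softmax(scores, dim=0)                     # normalize over the bag
        slide_embedding = (weights * patch_embeddings).sum(dim=0)  # (embed_dim,)
        logits = self.classifier(slide_embedding)                  # slide-level prediction
        return logits, weights.squeeze(-1)                         # weights -> localization

# Usage: a bag of 1000 patches embedded by a ViT-S backbone (384-d)
bag = torch.randn(1000, 384)
logits, attn = AttentionMILAggregator()(bag)
```

The attention weights serve double duty here: they pool patch features into a slide embedding for classification, and they indicate which patches drive the slide-level decision, which is how slide-level labels can yield coarse localization.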
Related papers
- MicroMIL: Graph-based Contextual Multiple Instance Learning for Patient Diagnosis Using Microscopy Images [2.324913904215885]
Whole-slide images (WSIs) analyzed with weakly-supervised multiple instance learning (MIL) are costly to produce, memory-intensive, and require extensive analysis time.
We introduce MicroMIL, a weakly-supervised MIL framework specifically built to address these challenges.
Graph edges are constructed from the upper-triangular similarity matrix, with nodes connected to their most similar neighbors, and a graph neural network (GNN) is utilized to capture contextual information (see the graph-construction sketch below).
arXiv Detail & Related papers (2024-07-31T13:38:47Z)
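A minimal sketch of the graph construction described for MicroMIL above, assuming cosine similarity over region embeddings. The function name and `k` are illustrative, and this symmetric top-k variant stands in for the paper's upper-triangular formulation for brevity.

```python
import torch
import torch.nn.functional as F

def build_similarity_graph(feats: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Link each node (a region embedding) to its k most similar neighbors
    under cosine similarity; returns a (2, N*k) edge_index in the format
    used by common GNN libraries such as PyTorch Geometric."""
    x = F.normalize(feats, dim=1)
    sim = x @ x.T                          # (N, N) cosine-similarity matrix
    sim.fill_diagonal_(float("-inf"))      # exclude self-loops
    topk = sim.topk(k, dim=1).indices      # k most similar neighbors per node
    src = torch.arange(x.size(0)).repeat_interleave(k)
    dst = topk.reshape(-1)
    return torch.stack([src, dst])

# Usage: 200 region embeddings of dimension 512
edge_index = build_similarity_graph(torch.randn(200, 512), k=8)
```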
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- Multi-level Multiple Instance Learning with Transformer for Whole Slide Image Classification [32.43847786719133]
A whole slide image (WSI) is a high-resolution scanned tissue image that is extensively employed in computer-assisted diagnosis (CAD).
We propose a Multi-level MIL (MMIL) scheme by introducing a hierarchical structure to MIL, which enables efficient handling of MIL tasks involving a large number of instances.
Based on MMIL, we instantiated MMIL-Transformer, an efficient Transformer model with windowed exact self-attention for large-scale MIL tasks (sketched below).
arXiv Detail & Related papers (2023-06-08T08:29:10Z)
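A minimal sketch of windowed exact self-attention as named in the MMIL-Transformer summary above: attention is computed exactly, but only within fixed-size windows of instances, so cost grows linearly with bag size rather than quadratically. The dimensions, window size, and padding scheme are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class WindowedSelfAttention(nn.Module):
    """Exact self-attention restricted to fixed-size windows of instances."""

    def __init__(self, dim: int = 384, window_size: int = 256, num_heads: int = 4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, d = x.shape
        pad = (-n) % self.window_size
        x = torch.cat([x, x.new_zeros(pad, d)])        # pad bag to a window multiple
        windows = x.view(-1, self.window_size, d)      # (num_windows, w, d)
        out, _ = self.attn(windows, windows, windows)  # exact attention per window
        return out.reshape(-1, d)[:n]                  # drop padding

# Usage: 10_000 patch tokens of dimension 384
tokens = torch.randn(10_000, 384)
mixed = WindowedSelfAttention()(tokens)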
- TPMIL: Trainable Prototype Enhanced Multiple Instance Learning for Whole Slide Image Classification [13.195971707693365]
We develop a Trainable Prototype enhanced deep MIL framework for weakly supervised WSI classification.
Our method reveals the correlations between different tumor subtypes through distances between the corresponding trained prototypes (see the sketch below).
We test our method on two WSI datasets and it achieves a new SOTA.
arXiv Detail & Related papers (2023-05-01T07:39:19Z)
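A minimal sketch of the prototype-distance idea from the TPMIL summary above; Euclidean distance is an assumption here, as the paper may use a different metric.

```python
import torch

def prototype_distances(prototypes: torch.Tensor) -> torch.Tensor:
    """Pairwise Euclidean distances between trained subtype prototypes;
    smaller distances suggest more closely related tumor subtypes."""
    return torch.cdist(prototypes, prototypes)

# Usage: 4 tumor-subtype prototypes of dimension 512
dist = prototype_distances(torch.randn(4, 512))
```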
- Cross-scale Multi-instance Learning for Pathological Image Diagnosis [20.519711186151635]
Multi-instance learning (MIL) is a common solution for working with high-resolution images by classifying bags of objects.
We propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis.
arXiv Detail & Related papers (2023-04-01T03:52:52Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL model and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation (a selection sketch follows below).
Combined with a novel attention-guiding loss, this boosts the accuracy of the trained models with only a few regions annotated per class.
In the future, this approach may serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
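A minimal sketch of uncertainty-based slide selection as described in the active-learning summary above; predictive entropy is one plausible confidence metric, assumed here rather than taken from the paper.

```python
import torch

def select_uncertain_slides(probs: torch.Tensor, budget: int = 10) -> torch.Tensor:
    """Rank WSIs by predictive entropy of slide-level class probabilities
    and return the indices of the `budget` most uncertain slides to send
    for expert annotation."""
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (num_slides,)
    return entropy.topk(budget).indices

# Usage: softmax outputs of an attention-based MIL model for 500 slides, 2 classes
probs = torch.softmax(torch.randn(500, 2), dim=1)
to_annotate = select_uncertain_slides(probs, budget=10)
```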
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, achieving the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole-slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithm development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need of external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all of its content) and is not responsible for any consequences of its use.