Mimicking a Pathologist: Dual Attention Model for Scoring of Gigapixel
Histology Images
- URL: http://arxiv.org/abs/2302.09682v1
- Date: Sun, 19 Feb 2023 22:26:25 GMT
- Title: Mimicking a Pathologist: Dual Attention Model for Scoring of Gigapixel
Histology Images
- Authors: Manahil Raza, Ruqayya Awan, Raja Muhammad Saad Bashir, Talha Qaiser,
Nasir M. Rajpoot
- Abstract summary: We propose a novel dual attention approach, consisting of two main components, to mimic visual examination by a pathologist.
We employ our proposed model on two different IHC use cases: HER2 prediction in breast cancer and prediction of the Intact/Loss status of two MMR biomarkers in colorectal cancer.
- Score: 12.53157021039492
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Some major challenges associated with the automated processing of whole slide
images (WSIs) include their sheer size, different magnification levels and
high resolution. Utilizing these images directly in AI frameworks is
computationally expensive due to memory constraints, while downsampling WSIs
incurs information loss, and splitting WSIs into tiles and patches results in the
loss of important contextual information. We propose a novel dual attention
approach, consisting of two main components, to mimic visual examination by a
pathologist. The first component is a soft attention model which takes as input
a high-level view of the WSI to determine various regions of interest. We
employ a custom sampling method to extract diverse and spatially distinct image
tiles from selected high attention areas. The second component is a hard
attention classification model, which further extracts a sequence of
multi-resolution glimpses from each tile for classification. Since hard
attention is non-differentiable, we train this component using reinforcement
learning and predict the location of glimpses without processing all patches of
a given tile, thereby aligning with a pathologist's way of diagnosis. We train
our components both separately and in an end-to-end fashion using a joint loss
function to demonstrate the efficacy of our proposed model. We employ our
proposed model on two different IHC use cases: HER2 prediction in breast cancer
and prediction of the Intact/Loss status of two MMR biomarkers in colorectal
cancer. We show that the proposed model achieves accuracy comparable to
state-of-the-art methods while only processing a small fraction of the WSI at
highest magnification.
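The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal PyTorch-style illustration assuming a small CNN for the soft-attention stage, a greedy spatial sampler, a GRU-based glimpse network and a REINFORCE term in the joint loss; all module names, tensor shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftAttention(nn.Module):
    """Soft-attention component: scores a downsampled (thumbnail) view of the WSI."""
    def __init__(self, in_ch=3, hidden=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.att = nn.Conv2d(hidden, 1, 1)  # one attention score per spatial location

    def forward(self, thumbnail):                      # (1, 3, H, W)
        return torch.sigmoid(self.att(self.features(thumbnail)))  # (1, 1, H, W)


def sample_distinct_tiles(att_map, k=8, min_dist=4):
    """Greedily pick k high-attention locations that are spatially distinct."""
    scores = att_map[0, 0]                             # (H, W)
    H, W = scores.shape
    coords = []
    for idx in scores.flatten().argsort(descending=True).tolist():
        y, x = divmod(idx, W)
        if all(abs(y - cy) + abs(x - cx) >= min_dist for cy, cx in coords):
            coords.append((y, x))
        if len(coords) == k:
            break
    return coords  # to be mapped back to tile coordinates at full magnification


class GlimpseClassifier(nn.Module):
    """Hard-attention component: emits the next glimpse location and a class score."""
    def __init__(self, glimpse_dim=3 * 32 * 32, hidden=256, n_classes=2):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(glimpse_dim, hidden), nn.ReLU())
        self.rnn = nn.GRUCell(hidden, hidden)
        self.loc_head = nn.Linear(hidden, 2)           # mean of next (x, y) in [-1, 1]
        self.cls_head = nn.Linear(hidden, n_classes)

    def forward(self, glimpse, h):                     # glimpse: (B, glimpse_dim)
        h = self.rnn(self.encode(glimpse), h)
        return torch.tanh(self.loc_head(h)), self.cls_head(h), h


def joint_loss(logits, labels, loc_log_probs, reward):
    """Cross-entropy for classification plus a REINFORCE term for the glimpse policy."""
    ce = F.cross_entropy(logits, labels)
    # reward: e.g. 1.0 for a correct final prediction, 0.0 otherwise (no baseline here)
    reinforce = -(loc_log_probs.sum(dim=1) * reward).mean()
    return ce + reinforce
```

In this sketch, the soft-attention map selects tiles at low magnification and the glimpse classifier then visits only a handful of multi-resolution crops per tile, which is what keeps the fraction of the WSI processed at the highest magnification small.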
Related papers
- Dual-Image Enhanced CLIP for Zero-Shot Anomaly Detection [58.228940066769596]
We introduce a Dual-Image Enhanced CLIP approach, leveraging a joint vision-language scoring system.
Our methods process pairs of images, utilizing each as a visual reference for the other, thereby enriching the inference process with visual context.
Our approach exploits the potential of joint vision-language anomaly detection and demonstrates performance comparable to current SOTA methods across various datasets.
arXiv Detail & Related papers (2024-05-08T03:13:20Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - The Whole Pathological Slide Classification via Weakly Supervised
Learning [7.313528558452559]
We introduce two pathological priors: nuclear heterogeneity of diseased cells and spatial correlation of pathological tiles.
We propose a data augmentation method that utilizes stain separation during extractor training.
We then describe the spatial relationships between the tiles using an adjacency matrix.
By integrating these two views, we designed a multi-instance framework for analyzing H&E-stained tissue images.
arXiv Detail & Related papers (2023-07-12T16:14:23Z) - Active Learning Enhances Classification of Histopathology Whole Slide
Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL model and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation (see the sketch after this list).
With a novel attention-guiding loss, this boosts the accuracy of the trained models with only a few regions annotated for each class.
It may in the future serve as an important contribution to train MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z) - Hierarchical Transformer for Survival Prediction Using Multimodality
Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z) - Hard Exudate Segmentation Supplemented by Super-Resolution with
Multi-scale Attention Fusion Module [14.021944194533644]
Hard exudates (HE) are the most specific biomarker for retinal edema.
This paper proposes a novel hard exudates segmentation method named SS-MAF with an auxiliary super-resolution task.
We evaluate our method on two public lesion datasets, IDRiD and E-Ophtha.
arXiv Detail & Related papers (2022-11-17T08:25:04Z) - Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised
Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z) - An End-to-End Breast Tumour Classification Model Using Context-Based
Patch Modelling- A BiLSTM Approach for Image Classification [19.594639581421422]
We integrate the contextual relationship between patches along with the feature-based correlation among patches extracted from the tumorous region.
We trained and tested our model on two datasets, microscopy images and WSI tumour regions.
We found that BiLSTMs with CNN features perform much better at modelling patches into an end-to-end image classification network.
arXiv Detail & Related papers (2021-06-05T10:43:58Z) - Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z)