Cross-scale Attention Guided Multi-instance Learning for Crohn's Disease
Diagnosis with Pathological Images
- URL: http://arxiv.org/abs/2208.07322v1
- Date: Mon, 15 Aug 2022 16:39:34 GMT
- Title: Cross-scale Attention Guided Multi-instance Learning for Crohn's Disease
Diagnosis with Pathological Images
- Authors: Ruining Deng, Can Cui, Lucas W. Remedios, Shunxing Bao, R. Michael
Womick, Sophie Chiron, Jia Li, Joseph T. Roland, Ken S. Lau, Qi Liu, Keith T.
Wilson, Yaohong Wang, Lori A. Coburn, Bennett A. Landman, Yuankai Huo
- Abstract summary: Multi-instance learning (MIL) is widely used in the computer-aided interpretation of pathological Whole Slide Images (WSIs).
We propose a novel cross-scale attention mechanism to explicitly aggregate inter-scale interactions into a single MIL network for Crohn's Disease (CD).
Our approach achieved a superior Area under the Curve (AUC) score of 0.8924 compared with baseline models.
- Score: 22.98849180654734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-instance learning (MIL) is widely used in the computer-aided
interpretation of pathological Whole Slide Images (WSIs) to solve the lack of
pixel-wise or patch-wise annotations. Often, this approach directly applies
"natural image driven" MIL algorithms which overlook the multi-scale (i.e.
pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are typically deployed
on a single scale of WSIs (e.g., 20x magnification), while human pathologists
usually aggregate the global and local patterns in a multi-scale manner (e.g.,
by zooming in and out between different magnifications). In this study, we
propose a novel cross-scale attention mechanism to explicitly aggregate
inter-scale interactions into a single MIL network for Crohn's Disease (CD),
which is a form of inflammatory bowel disease. The contribution of this paper
is two-fold: (1) a cross-scale attention mechanism is proposed to aggregate
features from different resolutions with multi-scale interaction; and (2)
differential multi-scale attention visualizations are generated to localize
explainable lesion patterns. By training on ~250,000 H&E-stained Ascending Colon
(AC) patches from 20 CD patients and 30 healthy control samples at different
scales, our approach achieved a superior Area under the Curve (AUC) score of
0.8924 compared with baseline models. The official implementation is publicly
available at https://github.com/hrlblab/CS-MIL.
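To make the mechanism concrete, here is a minimal PyTorch sketch of a cross-scale attention MIL head. It illustrates the idea only and is not the official CS-MIL code: the gated-attention form, layer sizes, and names are assumptions; the authors' implementation is in the repository linked above.

```python
# Minimal sketch of cross-scale attention MIL (illustrative, not the official CS-MIL code).
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        # Attention over instances (patches) within each scale.
        self.inst_attn = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        # Cross-scale attention: how much each magnification contributes.
        self.scale_attn = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bags):
        # bags: list of tensors, one per scale, each (n_patches, feat_dim),
        # holding patch embeddings extracted at one magnification.
        scale_reprs = []
        for feats in bags:
            a = torch.softmax(self.inst_attn(feats), dim=0)   # (n_patches, 1)
            scale_reprs.append((a * feats).sum(dim=0))        # (feat_dim,)
        s = torch.stack(scale_reprs)                          # (n_scales, feat_dim)
        w = torch.softmax(self.scale_attn(s), dim=0)          # (n_scales, 1)
        slide_repr = (w * s).sum(dim=0)                       # (feat_dim,)
        return self.classifier(slide_repr), w
```

Returning the scale weights w alongside the logits is what would allow differential multi-scale attention maps of the kind the abstract uses to localize lesion patterns.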
Related papers
- CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images [0.41873161228906586]
Multiple Instance Learning models have proven effective for cancer prognosis from Whole Slide Images.
The original MIL formulation incorrectly assumes that patches from the same image are independent.
We propose a versatile regularization scheme designed to seamlessly integrate spatial knowledge into any MIL model.
arXiv Detail & Related papers (2024-08-01T09:59:57Z)
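The summary does not spell out CARMIL's regularizer; as a hypothetical stand-in, one simple way to inject spatial knowledge into any MIL model is to penalize embedding disagreement between spatially adjacent patches:

```python
# Hypothetical spatial regularizer for MIL (not CARMIL's actual formulation):
# neighboring patches on the slide are encouraged to have similar embeddings.
import torch

def spatial_smoothness_loss(embeddings, adjacency):
    # embeddings: (n_patches, d) patch features from the MIL encoder.
    # adjacency:  (n_patches, n_patches) binary matrix, 1 for slide neighbors.
    diffs = torch.cdist(embeddings, embeddings)            # pairwise L2 distances
    return (adjacency * diffs).sum() / adjacency.sum().clamp(min=1)
```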
- Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning [0.23301643766310368]
Whole Slide Images (WSI) represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level instead of the tile level.
We propose a weakly supervised Multiple Instance Learning (MIL) approach to accurately predict the overall cancer phenotype.
arXiv Detail & Related papers (2024-04-01T19:33:41Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
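As a sketch of the residual-adapter idea (the paper's exact adapter design is not given in this summary), a small bottleneck MLP with a skip connection can be attached at each level of the frozen CLIP encoder, so that only the adapters are trained:

```python
# Assumed residual adapter sketch: a trainable bottleneck added to a frozen encoder level.
import torch.nn as nn

class ResidualAdapter(nn.Module):
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # The skip connection preserves the pre-trained features while the
        # adapter learns a small medical-domain correction on top of them.
        return x + self.up(self.act(self.down(x)))
```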
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
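The summary leaves DEC-Seg's losses unspecified; one plausible dual-scale consistency term, sketched below under the assumption of a fully convolutional segmentation model, asks predictions on a downscaled view to agree with downscaled full-resolution predictions:

```python
# Assumed dual-scale consistency sketch (not DEC-Seg's actual losses).
import torch.nn.functional as F

def scale_consistency_loss(model, image):
    # image: (B, C, H, W); model is assumed fully convolutional, so its
    # output resolution follows the input resolution.
    pred_full = model(image)
    small = F.interpolate(image, scale_factor=0.5, mode="bilinear", align_corners=False)
    pred_small = model(small)
    # Downscale the full-resolution prediction to serve as a (detached) target.
    target = F.interpolate(pred_full, scale_factor=0.5, mode="bilinear", align_corners=False)
    return F.mse_loss(pred_small, target.detach())
```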
- The Whole Pathological Slide Classification via Weakly Supervised Learning [7.313528558452559]
We introduce two pathological priors: nuclear disease of cells and spatial correlation of pathological tiles.
We propose a data augmentation method that utilizes stain separation during extractor training.
We then describe the spatial relationships between the tiles using an adjacency matrix.
By integrating these two views, we designed a multi-instance framework for analyzing H&E-stained tissue images.
arXiv Detail & Related papers (2023-07-12T16:14:23Z)
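The adjacency-matrix step is easy to make concrete. In the sketch below, the integer grid coordinates and the 8-neighbor rule are illustrative assumptions; tiles are marked as related when they touch on the slide grid:

```python
# Illustrative tile adjacency from grid coordinates (assumed 8-neighbor rule).
import numpy as np

def tile_adjacency(coords):
    # coords: (n_tiles, 2) integer grid positions of tiles on the WSI.
    coords = np.asarray(coords)
    diff = np.abs(coords[:, None, :] - coords[None, :, :])   # (n, n, 2)
    # Chebyshev distance 1 means the two tiles are 8-neighbors; the diagonal
    # stays zero because a tile's distance to itself is 0, not 1.
    return (diff.max(axis=-1) == 1).astype(np.float32)
```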
- Cross-scale Multi-instance Learning for Pathological Image Diagnosis [20.519711186151635]
Multi-instance learning (MIL) is a common solution for working with high-resolution images by classifying bags of objects.
We propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis.
arXiv Detail & Related papers (2023-04-01T03:52:52Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention-guiding loss, this yields an accuracy boost for the trained models even when only a few regions are annotated per class.
It may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
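The confidence metric itself is not specified in this summary; a common stand-in, sketched here, ranks unlabeled slides by the predictive entropy of the attention-MIL output and sends the top-k to the expert:

```python
# Assumed uncertainty-based slide selection (the paper's metric may differ).
import torch

def select_uncertain_slides(logits, k=10):
    # logits: (n_slides, n_classes) MIL predictions for unlabeled slides.
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return torch.topk(entropy, k).indices   # slides to annotate next
```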
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, achieving the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Segmentation of Cellular Patterns in Confocal Images of Melanocytic Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net) [2.0487455621441377]
"Multiscale-Decoder Network (MED-Net)" provides pixel-wise labeling into classes of patterns in a quantitative manner.
We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions.
arXiv Detail & Related papers (2020-01-03T22:34:52Z)