CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
- URL: http://arxiv.org/abs/2305.05314v3
- Date: Thu, 10 Oct 2024 16:13:06 GMT
- Title: CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
- Authors: Olga Fourkioti, Matt De Vries, Chen Jin, Daniel C. Alexander, Chris Bakal
- Abstract summary: We propose the Context-Aware Multiple Instance Learning (CAMIL) architecture for cancer diagnosis.
CAMIL incorporates neighbor-constrained attention to consider dependencies among tiles within a Whole Slide Image (WSI) and integrates contextual constraints as prior knowledge.
We evaluate CAMIL on subtyping non-small cell lung cancer (TCGA-NSCLC) and detecting lymph node metastasis (CAMELYON16 and CAMELYON17), achieving test AUCs of 97.5%, 95.9%, and 88.1%, respectively.
- Score: 3.1118773046912382
- Abstract: The visual examination of tissue biopsy sections is fundamental for cancer diagnosis, with pathologists analyzing sections at multiple magnifications to discern tumor cells and their subtypes. However, existing attention-based multiple instance learning (MIL) models used for analyzing Whole Slide Images (WSIs) in cancer diagnostics often overlook the contextual information of tumor and neighboring tiles, leading to misclassifications. To address this, we propose the Context-Aware Multiple Instance Learning (CAMIL) architecture. CAMIL incorporates neighbor-constrained attention to consider dependencies among tiles within a WSI and integrates contextual constraints as prior knowledge into the MIL model. We evaluated CAMIL on subtyping non-small cell lung cancer (TCGA-NSCLC) and detecting lymph node (CAMELYON16 and CAMELYON17) metastasis, achieving test AUCs of 97.5%, 95.9%, and 88.1%, respectively, outperforming other state-of-the-art methods. Additionally, CAMIL enhances model interpretability by identifying regions of high diagnostic value.
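To make the neighbor-constrained attention idea above concrete, here is a minimal sketch (an illustrative assumption, not the published CAMIL implementation): each tile attends only to its spatially adjacent tiles before a standard attention-based MIL pooling step. The feature dimension, adjacency construction, and module names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborConstrainedAttentionMIL(nn.Module):
    """Toy context-aware MIL head: attention is masked to spatially adjacent tiles."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.q = nn.Linear(feat_dim, hidden_dim)
        self.k = nn.Linear(feat_dim, hidden_dim)
        self.v = nn.Linear(feat_dim, feat_dim)
        # Attention-based MIL pooling over the context-aware tile features.
        self.attn_pool = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh(),
                                       nn.Linear(hidden_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (N, feat_dim) tile embeddings; adj: (N, N) binary spatial adjacency.
        q, k = self.q(feats), self.k(feats)
        scores = q @ k.T / q.shape[-1] ** 0.5
        keep = (adj + torch.eye(adj.shape[0], device=adj.device)) > 0  # self + neighbors
        scores = scores.masked_fill(~keep, float("-inf"))              # neighbor constraint
        context = F.softmax(scores, dim=-1) @ self.v(feats)            # context-aware features
        a = F.softmax(self.attn_pool(context), dim=0)                  # MIL attention weights
        slide_repr = (a * context).sum(dim=0)                          # slide-level embedding
        return self.classifier(slide_repr)                             # slide-level logits
```

In use, `feats` would come from any pretrained patch encoder and `adj` from the tiles' grid coordinates on the slide.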
Related papers
- CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images [0.41873161228906586]
Multiple Instance Learning models have proven effective for cancer prognosis from Whole Slide Images.
The original MIL formulation incorrectly assumes the patches of the same image to be independent.
We propose a versatile regularization scheme designed to seamlessly integrate spatial knowledge into any MIL model.
arXiv Detail & Related papers (2024-08-01T09:59:57Z)
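As a hedged illustration of what a plug-in spatial regularizer for MIL might look like (the concrete CARMIL formulation is not reproduced here; the penalty below is an assumption), one can penalize disagreement between the scores of spatially adjacent tiles and add the term to any MIL bag loss:

```python
import torch

def spatial_consistency_penalty(tile_scores: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Illustrative spatial penalty: mean squared gap between neighboring tile scores."""
    # tile_scores: (N,) per-tile scores or attention weights from any MIL model.
    # adj: (N, N) binary adjacency built from the tiles' grid coordinates.
    diff = tile_scores.unsqueeze(0) - tile_scores.unsqueeze(1)    # (N, N) pairwise differences
    return (adj * diff.pow(2)).sum() / adj.sum().clamp(min=1)

# Hypothetical usage: total_loss = bag_loss + lambda_spatial * spatial_consistency_penalty(s, adj)
```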
- Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning [0.23301643766310368]
Whole Slide Images (WSI) represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at slide-level, instead of tile-level.
We propose a weakly supervised Multiple Instance Learning (MIL) approach to accurately predict the overall cancer phenotype.
arXiv Detail & Related papers (2024-04-01T19:33:41Z)
- Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework for Breast Cancer Detection and Segmentation [48.08423125835335]
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
Combined with a novel attention-guiding loss, this yields an accuracy boost for the trained models with only a few annotated regions per class.
It may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
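For concreteness, here is a minimal sketch of one plausible version of the selection step in the active-learning entry above (the paper's actual confidence metric and attention-guiding loss are not shown; predictive entropy here is an assumption): rank unlabeled slides by the entropy of the MIL model's predicted class probabilities and query the most uncertain ones.

```python
import torch

def select_uncertain_wsis(slide_probs: torch.Tensor, k: int) -> torch.Tensor:
    """Rank unlabeled WSIs by predictive entropy and return the k most uncertain ones."""
    # slide_probs: (num_slides, n_classes) class probabilities from an attention-based MIL model.
    entropy = -(slide_probs * slide_probs.clamp(min=1e-12).log()).sum(dim=1)
    return torch.topk(entropy, k).indices  # indices of slides to send for expert annotation
```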
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Selecting Regions of Interest in Large Multi-Scale Images for Cancer Pathology [0.0]
High resolution scans of microscopy slides offer enough information for a cancer pathologist to come to a conclusion regarding cancer presence, subtype, and severity based on measurements of features within the slide image at multiple scales and resolutions.
We explore approaches based on Reinforcement Learning and Beam Search to learn to progressively zoom into the WSI to detect Regions of Interest (ROIs) in liver pathology slides containing one of two types of liver cancer, namely Hepatocellular Carcinoma (HCC) and Cholangiocarcinoma (CC).
These ROIs can then be presented directly to the pathologist to aid in measurement and diagnosis or be used
arXiv Detail & Related papers (2020-07-03T15:27:41Z)
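As a toy illustration of the beam-search variant of progressive zooming described in the entry above (the scoring model, 2x2 split factor, and beam width below are assumptions, not the paper's settings), one can keep the top-scoring regions at each magnification level and expand only their children at the next level:

```python
from typing import Callable, List, Tuple

Region = Tuple[int, int, int]  # (zoom level, row, col) in a multi-resolution tile pyramid

def children(region: Region) -> List[Region]:
    """Each region splits into a 2x2 block of sub-regions one zoom level deeper (assumed)."""
    level, row, col = region
    return [(level + 1, 2 * row + dr, 2 * col + dc) for dr in (0, 1) for dc in (0, 1)]

def beam_search_rois(score_region: Callable[[Region], float],
                     max_level: int, beam_width: int = 4) -> List[Region]:
    """Keep only the `beam_width` highest-scoring regions while zooming in level by level."""
    beam: List[Region] = [(0, 0, 0)]                      # start from the slide thumbnail
    for _ in range(max_level):
        candidates = [c for r in beam for c in children(r)]
        candidates.sort(key=score_region, reverse=True)   # most relevant regions first
        beam = candidates[:beam_width]                    # prune the search frontier
    return beam                                           # candidate ROIs at the deepest level
```

Here `score_region` stands in for any learned relevance model over the region's pixels; the returned regions are the candidate ROIs to present to a pathologist.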
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
- STAN: Small Tumor-Aware Network for Breast Ultrasound Image Segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
- Multi-scale Domain-adversarial Multiple-instance CNN for Cancer Subtype Classification with Unannotated Histopathological Images [16.02231907106384]
We develop a new CNN-based cancer subtype classification method by effectively combining multiple-instance, domain adversarial, and multi-scale learning frameworks.
The classification performance was significantly better than the standard CNN or other conventional methods, and the accuracy compared favorably with that of standard pathologists.
arXiv Detail & Related papers (2020-01-06T14:09:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.