Multi-Stain Multi-Level Convolutional Network for Multi-Tissue Breast Cancer Image Segmentation
- URL: http://arxiv.org/abs/2406.05828v1
- Date: Sun, 9 Jun 2024 15:35:49 GMT
- Title: Multi-Stain Multi-Level Convolutional Network for Multi-Tissue Breast Cancer Image Segmentation
- Authors: Akash Modi, Sumit Kumar Jha, Purnendu Mishra, Rajiv Kumar, Kiran Aatre, Gursewak Singh, Shubham Mathur,
- Abstract summary: We propose a novel convolutional neural network (CNN) based multi-class tissue segmentation model for histopathology.
Our model separates bad regions such as folds, artifacts, blurry regions, bubbles, etc. from tissue regions using multi-level context.
Our training pipeline used 12 million patches generated using context-aware augmentations which made our model stain and scanner invariant.
- Score: 5.572436001833252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Digital pathology and microscopy image analysis are widely employed in the segmentation of digitally scanned IHC slides, primarily to identify cancer and pinpoint regions of interest (ROI) indicative of tumor presence. However, current ROI segmentation models are either stain-specific or suffer from stain and scanner variance caused by differing staining protocols or modalities across labs. In addition, tissues such as Ductal Carcinoma in Situ (DCIS), acini, etc. are often misclassified as tumor due to their structural similarities and color compositions. In this paper, we propose a novel convolutional neural network (CNN) based multi-class tissue segmentation model for histopathology whole-slide breast images that classifies tumors and segments other tissue regions such as ducts, acini, DCIS, squamous epithelium, blood vessels, necrosis, etc. as separate classes. Our unique pixel-aligned non-linear merge across spatial resolutions gives the model both local and global fields of view for accurate detection of the various classes. The proposed model also separates bad regions such as folds, artifacts, blurry regions, bubbles, etc. from tissue regions using multi-level context from different resolutions of the WSI. Multi-phase iterative training with context-aware augmentation and increasing noise was used to efficiently train a multi-stain generic model with partial and noisy annotations from 513 slides. Our training pipeline used 12 million patches generated with context-aware augmentations, which made the model stain- and scanner-invariant across data sources. To test stain and scanner invariance, the model was evaluated on 23,000 patches of a completely new stain (Hematoxylin and Eosin) from a completely new scanner (Motic) at a different lab. The mean IoU was 0.72, on par with the model's performance on other data sources and scanners.
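The mean IoU of 0.72 reported above is the standard per-class intersection-over-union averaged across classes. As a point of reference only (this is a generic sketch, not the authors' evaluation code), a minimal NumPy implementation on integer label maps looks like this:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    `pred` and `target` are integer label maps of the same shape.
    Classes absent from both maps are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both prediction and target
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with three classes (0 = background).
pred = np.array([[0, 1, 1, 2],
                 [0, 1, 2, 2]])
target = np.array([[0, 1, 1, 1],
                   [0, 1, 2, 2]])
print(round(mean_iou(pred, target, num_classes=3), 3))  # prints 0.806
```

In whole-slide evaluation the same computation is typically accumulated over all patches (summing per-class intersections and unions across patches before dividing) rather than averaging per-patch scores, to avoid bias from patches where a class barely appears.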
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Omni-Seg: A Single Dynamic Network for Multi-label Renal Pathology Image Segmentation using Partially Labeled Data [6.528287373027917]
In non-cancer pathology, the learning algorithms can be asked to examine more comprehensive tissue types simultaneously.
Prior approaches needed to train multiple segmentation networks in order to match the domain-specific knowledge.
By learning from 150,000 patch-wise pathological images, the proposed Omni-Seg network achieved superior segmentation accuracy with lower resource consumption.
arXiv Detail & Related papers (2021-12-23T16:02:03Z)
- Automatic Semantic Segmentation of the Lumbar Spine. Clinical Applicability in a Multi-parametric and Multi-centre MRI study [0.0]
This document describes the topologies and analyses the results of the neural network designs that obtained the most accurate segmentations.
Several of the proposed designs outperform the standard U-Net used as baseline, especially when used in ensembles where the output of multiple neural networks is combined according to different strategies.
arXiv Detail & Related papers (2021-11-16T17:33:05Z)
- Assessing domain adaptation techniques for mitosis detection in multi-scanner breast cancer histopathology images [0.6999740786886536]
We train two mitosis detection models and two style transfer methods and evaluate the usefulness of the latter for improving mitosis detection performance.
The best of these models, U-Net without style transfer, achieved an F1-score of 0.693 on the MIDOG 2021 preliminary test set.
arXiv Detail & Related papers (2021-09-01T16:27:46Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- An End-to-End Breast Tumour Classification Model Using Context-Based Patch Modelling - A BiLSTM Approach for Image Classification [19.594639581421422]
We integrate the spatial relationship between patches along with the feature-based correlation among the patches extracted from the tumorous region.
We trained and tested our model on two datasets, microscopy images and WSI tumour regions.
We found that BiLSTMs with CNN features perform much better at modelling patches in an end-to-end image classification network.
arXiv Detail & Related papers (2021-06-05T10:43:58Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Multi-scale Domain-adversarial Multiple-instance CNN for Cancer Subtype Classification with Unannotated Histopathological Images [16.02231907106384]
We develop a new CNN-based cancer subtype classification method by effectively combining multiple-instance, domain adversarial, and multi-scale learning frameworks.
The classification performance was significantly better than the standard CNN or other conventional methods, and the accuracy compared favorably with that of standard pathologists.
arXiv Detail & Related papers (2020-01-06T14:09:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.