OMG-Net: A Deep Learning Framework Deploying Segment Anything to Detect Pan-Cancer Mitotic Figures from Haematoxylin and Eosin-Stained Slides
- URL: http://arxiv.org/abs/2407.12773v1
- Date: Wed, 17 Jul 2024 17:53:37 GMT
- Title: OMG-Net: A Deep Learning Framework Deploying Segment Anything to Detect Pan-Cancer Mitotic Figures from Haematoxylin and Eosin-Stained Slides
- Authors: Zhuoyan Shen, Mikael Simard, Douglas Brand, Vanghelita Andrei, Ali Al-Khader, Fatine Oumlil, Katherine Trevers, Thomas Butters, Simon Haefliger, Eleanna Kara, Fernanda Amary, Roberto Tirabosco, Paul Cool, Gary Royle, Maria A. Hawkins, Adrienne M. Flanagan, Charles-Antoine Collins Fekete
- Abstract summary: In this study, we propose an artificial intelligence (AI) approach to detect MFs in digitised whole slide images (WSIs)
Here we establish the largest pan-cancer dataset of mitotic figures by combining an in-house dataset of soft tissue tumours (STMF) with five open-source mitotic datasets (ICPR, TUPAC, CCMCT, CMC and MIDOG++)
We then employed a two-stage framework, the Optimised Mitoses Generator Network (OMG-Net), to classify MFs.
- Score: 27.84599956781646
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mitotic activity is an important feature for grading several cancer types. Counting mitotic figures (MFs) is a time-consuming, laborious task prone to inter-observer variation. Inaccurate recognition of MFs can lead to incorrect grading and hence potential suboptimal treatment. In this study, we propose an artificial intelligence (AI)-aided approach to detect MFs in digitised haematoxylin and eosin-stained whole slide images (WSIs). Advances in this area are hampered by the limited number and types of cancer datasets of MFs. Here we establish the largest pan-cancer dataset of mitotic figures by combining an in-house dataset of soft tissue tumours (STMF) with five open-source mitotic datasets comprising multiple human cancers and canine specimens (ICPR, TUPAC, CCMCT, CMC and MIDOG++). This new dataset identifies 74,620 MFs and 105,538 mitotic-like figures. We then employed a two-stage framework (the Optimised Mitoses Generator Network (OMG-Net)) to classify MFs. The framework first deploys the Segment Anything Model (SAM) to automate the contouring of MFs and surrounding objects. An adapted ResNet18 is subsequently trained to classify MFs. OMG-Net reaches an F1-score of 0.84 on pan-cancer MF detection (breast carcinoma, neuroendocrine tumour and melanoma), largely outperforming the previous state-of-the-art MIDOG++ benchmark model on its hold-out testing set (e.g. +16% F1-score on breast cancer detection, p<0.001), thereby providing superior accuracy in detecting MFs on various types of tumours obtained with different scanners.
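The abstract reports detection performance as an F1-score (0.84 pan-cancer, +16% on breast cancer). As a minimal illustration of how a detection-level F1-score is computed from true-positive, false-positive and false-negative counts (the counts below are hypothetical and not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Detection-level F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 84 detected MFs, 16 spurious, 16 missed.
print(round(f1_score(tp=84, fp=16, fn=16), 2))  # 0.84
```

Precision penalises mitotic-like figures wrongly flagged as MFs, while recall penalises missed MFs, so F1 reflects both failure modes the paper's dataset of 105,538 mitotic-like figures is designed to probe.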
Related papers
- A bag of tricks for real-time Mitotic Figure detection [0.0]
We build on the efficient RTMDet single-stage object detector to achieve high inference speed suitable for clinical deployment.
We employ targeted, hard negative mining on necrotic and debris tissue to reduce false positives.
On the preliminary test set of the MItosis DOmain Generalization (MIDOG) 2025 challenge, our single-stage RTMDet-S based approach reaches an F1 of 0.81.
arXiv Detail & Related papers (2025-08-27T11:45:44Z) - HistoART: Histopathology Artifact Detection and Reporting Tool [37.31105955164019]
Whole Slide Imaging (WSI) is widely used to digitize tissue specimens for detailed, high-resolution examination.
WSI remains vulnerable to artifacts introduced during slide preparation and scanning.
We propose and compare three robust artifact detection approaches for WSIs.
arXiv Detail & Related papers (2025-06-23T17:22:19Z) - Towards a Multimodal MRI-Based Foundation Model for Multi-Level Feature Exploration in Segmentation, Molecular Subtyping, and Grading of Glioma [0.2796197251957244]
Multi-Task S-UNETR (MTSUNET) model is a novel foundation-based framework built on the BrainSegFounder model.
It simultaneously performs glioma segmentation, histological subtyping and neuroimaging subtyping.
It shows significant potential for advancing noninvasive, personalized glioma management by improving predictive accuracy and interpretability.
arXiv Detail & Related papers (2025-03-10T01:27:09Z) - Histologic Dataset of Normal and Atypical Mitotic Figures on Human Breast Cancer (AMi-Br) [0.2786153781225932]
Assessment of the density of mitotic figures (MFs) in histologic tumor sections is an important prognostic marker for many tumor types.
Recently, it has been reported in multiple works that the quantity of MFs with an atypical morphology might be an independent prognostic criterion for breast cancer.
We present the first ever publicly available dataset of atypical and normal MFs (AMi-Br).
arXiv Detail & Related papers (2025-01-08T12:41:42Z) - Exploiting Precision Mapping and Component-Specific Feature Enhancement for Breast Cancer Segmentation and Identification [0.0]
We propose novel Deep Learning (DL) frameworks for breast lesion segmentation and classification.
We introduce a precision mapping mechanism (PMM) for a precision mapping and attention-driven LinkNet (PMAD-LinkNet) segmentation framework.
We also introduce a component-specific feature enhancement module (CSFEM) for a component-specific feature-enhanced classifier (CSFEC-Net).
arXiv Detail & Related papers (2024-07-03T06:40:26Z) - Lung-CADex: Fully automatic Zero-Shot Detection and Classification of Lung Nodules in Thoracic CT Images [45.29301790646322]
Computer-aided diagnosis can help with early lung nodule detection and facilitate subsequent nodule characterization.
We propose CADe, for segmenting lung nodules in a zero-shot manner using a variant of the Segment Anything Model called MedSAM.
We also propose CADx, a method for characterizing nodules as benign or malignant by building a gallery of radiomic features and aligning image-feature pairs through contrastive learning.
arXiv Detail & Related papers (2024-07-02T19:30:25Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
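The BraTS results above are reported as Dice similarity coefficients (DSC). A minimal sketch of how DSC is computed for a pair of binary segmentation masks (the masks below are illustrative, not challenge data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2 * intersection / (a.sum() + b.sum())

# Two 4x4 square masks on an 8x8 grid, offset by one column:
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 3:7] = True
print(dice(pred, truth))  # 0.75
```

DSC equals the F1-score of per-pixel classification, which is why values near 0.97 indicate very close agreement between predicted and reference tumour boundaries.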
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - CIMIL-CRC: a clinically-informed multiple instance learning framework for patient-level colorectal cancer molecular subtypes classification from H\&E stained images [42.771819949806655]
We introduce CIMIL-CRC, a framework that solves the MSI/MSS MIL problem by efficiently combining a pre-trained feature extraction model with principal component analysis (PCA) to aggregate information from all patches.
We assessed our CIMIL-CRC method using the average area under the curve (AUC) from a 5-fold cross-validation experimental setup for model development on the TCGA-CRC-DX cohort.
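The CIMIL-CRC summary describes aggregating per-patch features from a pre-trained extractor with PCA into a slide-level representation. A minimal sketch of that aggregation step, using random arrays as stand-ins for real patch embeddings and an SVD-based PCA (dimensions and component count are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-patch embeddings for one slide:
# 500 patches, each a 768-dim feature vector from a pre-trained model.
patch_features = rng.normal(size=(500, 768))

# PCA via SVD: centre the features and project onto the
# top principal components, then average over patches to
# obtain a single slide-level vector for the MIL classifier.
centred = patch_features - patch_features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
n_components = 32
slide_vector = (centred @ vt[:n_components].T).mean(axis=0)
print(slide_vector.shape)  # (32,)
```

Projecting before averaging keeps the directions of highest variance across patches, so the pooled vector retains more discriminative signal than a plain mean of raw embeddings.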
arXiv Detail & Related papers (2024-01-29T12:56:11Z) - Integrative Imaging Informatics for Cancer Research: Workflow Automation
for Neuro-oncology (I3CR-WANO) [0.12175619840081271]
We propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data.
Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, and iii) delineates tumor tissue subtypes.
It is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists.
arXiv Detail & Related papers (2022-10-06T18:23:42Z) - Deep learning-based approach to reveal tumor mutational burden status
from whole slide images across multiple cancer types [41.61294299606317]
Tumor mutational burden (TMB) is a potential genomic biomarker of immunotherapy.
TMB detected through whole exome sequencing lacks clinical penetration in low-resource settings.
In this study, we proposed a multi-scale deep learning framework to address the detection of TMB status from routinely used whole slide images.
arXiv Detail & Related papers (2022-04-07T07:02:32Z) - Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - A completely annotated whole slide image dataset of canine breast cancer
to aid human breast cancer research [6.960375869417005]
Current datasets on human breast cancer only provide annotations for small subsets of whole slide images (WSIs).
We present a novel dataset of 21 WSIs of canine mammary carcinoma (CMC) completely annotated for MFs.
We used machine learning to identify previously undetected MF.
arXiv Detail & Related papers (2020-08-24T08:06:55Z) - Synthesizing lesions using contextual GANs improves breast cancer
classification on mammograms [0.4297070083645048]
We present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms.
With self-attention and semi-supervised learning components, the U-net-based architecture can generate high resolution (256x256px) outputs.
arXiv Detail & Related papers (2020-05-29T21:23:00Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z) - Deep Feature Fusion for Mitosis Counting [0.0]
The mitotic cell count is one of the most common tests to assess the aggressiveness or grade of breast cancer.
Deep learning networks have been adapted to medical applications which are able to automatically localize regions of interest.
A proposed method leverages Faster RCNN for object detection while fusing segmentation features generated by a UNet with RGB image features to achieve an F-score of 0.508 on the MITOS-ATYPIA 2014 mitosis counting challenge dataset.
arXiv Detail & Related papers (2020-02-01T20:20:00Z) - Segmentation of Cellular Patterns in Confocal Images of Melanocytic
Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net) [2.0487455621441377]
"Multiscale Encoder-Decoder Network (MED-Net)" provides pixel-wise labeling into classes of patterns in a quantitative manner.
We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions.
arXiv Detail & Related papers (2020-01-03T22:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.