Vision Transformer-Based Deep Learning for Histologic Classification of Endometrial Cancer
- URL: http://arxiv.org/abs/2312.08479v2
- Date: Wed, 27 Mar 2024 15:38:27 GMT
- Title: Vision Transformer-Based Deep Learning for Histologic Classification of Endometrial Cancer
- Authors: Manu Goyal, Laura J. Tafe, James X. Feng, Kristen E. Muller, Liesbeth Hondelink, Jessica L. Bentz, Saeed Hassanpour
- Abstract summary: Endometrial cancer is the fourth most common cancer in females in the United States, with a lifetime risk of approximately 2.8%.
This study introduces EndoNet, which uses convolutional neural networks to extract histologic features and a vision transformer to aggregate them, classifying slides into high- and low-grade based on their visual characteristics.
The model was trained on 929 digitized hematoxylin and eosin-stained whole-slide images of endometrial cancer from hysterectomy cases at Dartmouth-Health.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Endometrial cancer is the fourth most common cancer in females in the United States, with a lifetime risk of approximately 2.8%. Precise histologic evaluation and molecular classification of endometrial cancer are important for effective patient management and determining the best treatment modalities. This study introduces EndoNet, which uses convolutional neural networks for extracting histologic features and a vision transformer for aggregating these features and classifying slides based on their visual characteristics into high- and low-grade. The model was trained on 929 digitized hematoxylin and eosin-stained whole-slide images of endometrial cancer from hysterectomy cases at Dartmouth-Health. It classifies these slides into low-grade (endometrioid grades 1 and 2) and high-grade (endometrioid carcinoma FIGO grade 3, uterine serous carcinoma, carcinosarcoma) categories. EndoNet was evaluated on an internal test set of 110 patients and an external test set of 100 patients from the public TCGA database. The model achieved a weighted average F1-score of 0.91 (95% CI: 0.86-0.95) and an AUC of 0.95 (95% CI: 0.89-0.99) on the internal test, and 0.86 (95% CI: 0.80-0.94) for F1-score and 0.86 (95% CI: 0.75-0.93) for AUC on the external test. Pending further validation, EndoNet has the potential to support pathologists in classifying the grades of gynecologic pathology tumors without the need for manual annotations.
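The two-stage design described in the abstract (a CNN that embeds patches, a vision transformer that aggregates those embeddings into a slide-level grade) can be illustrated with a minimal attention-pooling sketch. This is not the authors' code: the feature dimension, the random weights, and the single-head attention pooling below are all illustrative assumptions.

```python
import math
import random

random.seed(0)

FEAT_DIM = 8  # assumed patch-embedding size; real CNN features are much larger

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(patch_features, w_attn):
    """Aggregate per-patch features into one slide vector via attention weights."""
    scores = [sum(w * f for w, f in zip(w_attn, feat)) for feat in patch_features]
    alphas = softmax(scores)  # one attention weight per patch, summing to 1
    slide_vec = [sum(a * feat[i] for a, feat in zip(alphas, patch_features))
                 for i in range(FEAT_DIM)]
    return slide_vec, alphas

def classify(slide_vec, w_cls, bias):
    """Logistic head: probability that the slide is high-grade."""
    z = bias + sum(w * v for w, v in zip(w_cls, slide_vec))
    return 1.0 / (1.0 + math.exp(-z))

# Toy slide: 5 patch embeddings (in practice, thousands per whole-slide image)
patches = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(5)]
w_attn = [random.gauss(0, 1) for _ in range(FEAT_DIM)]
w_cls = [random.gauss(0, 1) for _ in range(FEAT_DIM)]

slide_vec, alphas = attention_pool(patches, w_attn)
p_high_grade = classify(slide_vec, w_cls, bias=0.0)
```

The attention weights make the aggregation interpretable: patches with high weights are the ones driving the slide-level prediction, which is one reason transformer-style pooling is popular in computational pathology.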
Related papers
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
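The dice similarity coefficient quoted above measures overlap between a predicted and a reference segmentation: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch on binary masks (toy data, not the challenge's lesion-wise evaluation code):

```python
def dice(pred, ref):
    """Dice similarity coefficient for two binary masks (flat lists of 0/1)."""
    intersection = sum(p & r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect

pred = [1, 1, 0, 0, 1, 0]
ref  = [1, 0, 0, 0, 1, 1]
score = dice(pred, ref)  # 2*2 / (3+3) = 0.666...
```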
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray
A deep-learning algorithm to predict coronary artery calcium (CAC) score was developed on 460 chest x-rays.
The diagnostic accuracy of the AICAC model assessed by the area under the curve (AUC) was the primary outcome.
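The AUC reported as the primary outcome here (and in several entries below) has a simple rank interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney U statistic). A minimal sketch with invented scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

perfect = auc([0.9, 0.8], [0.1, 0.2])  # every positive outscores every negative
chance = auc([0.5, 0.5], [0.5, 0.5])   # all ties: no better than chance
```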
arXiv Detail & Related papers (2024-03-27T16:56:14Z) - CIMIL-CRC: a clinically-informed multiple instance learning framework for patient-level colorectal cancer molecular subtypes classification from H&E stained images
We introduce CIMIL-CRC, a framework that solves the MSI/MSS MIL problem by efficiently combining a pre-trained feature extraction model with principal component analysis (PCA) to aggregate information from all patches.
We assessed our CIMIL-CRC method using the average area under the curve (AUC) from a 5-fold cross-validation experimental setup for model development on the TCGA-CRC-DX cohort.
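The aggregation idea in this entry (pre-trained patch features reduced with PCA and pooled to a patient-level representation) can be sketched minimally. The dimensions, the random data, and the max-pooling step below are all assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy patient: 20 patch feature vectors of dimension 16
# (a pre-trained CNN would supply these in the real pipeline)
patch_feats = rng.normal(size=(20, 16))

def pca_aggregate(feats, n_components=4):
    """Project patch features onto their top principal components,
    then max-pool over patches into one patient-level vector."""
    centered = feats - feats.mean(axis=0)
    # SVD of the centered matrix: rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ vt[:n_components].T  # (n_patches, n_components)
    return projected.max(axis=0)                # (n_components,)

patient_vec = pca_aggregate(patch_feats)
```

Max pooling is used here because the mean of centered projections is identically zero; a real multiple instance learning head would learn its pooling from data.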
arXiv Detail & Related papers (2024-01-29T12:56:11Z) - Prediction of Breast Cancer Recurrence Risk Using a Multi-Model Approach Integrating Whole Slide Imaging and Clinicopathologic Features
The aim of this study was to develop a multi-model approach integrating the analysis of whole slide images and clinicopathologic data to predict associated breast cancer recurrence risks.
The proposed novel methodology uses convolutional neural networks for feature extraction and vision transformers for contextual aggregation.
arXiv Detail & Related papers (2024-01-28T23:33:56Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification
This work investigates chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - Deep-Learning Tool for Early Identifying Non-Traumatic Intracranial Hemorrhage Etiology based on CT Scan
The deep learning model was developed with 1868 eligible NCCT scans with non-traumatic ICH collected between January 2011 and April 2018.
The model's diagnostic performance was compared with clinicians' performance.
Clinicians achieved significant improvements in the sensitivity, specificity, and accuracy of diagnosing certain hemorrhage etiologies when augmented by the proposed system.
arXiv Detail & Related papers (2023-02-02T08:45:17Z) - Predicting Axillary Lymph Node Metastasis in Early Breast Cancer Using Deep Learning on Primary Tumor Biopsy Slides
We developed a deep learning (DL)-based primary tumor biopsy signature for predicting axillary lymph node (ALN) metastasis.
A total of 1,058 early breast cancer (EBC) patients with pathologically confirmed ALN status were enrolled from May 2010 to August 2020.
arXiv Detail & Related papers (2021-12-04T02:23:18Z) - Using deep learning to detect patients at risk for prostate cancer despite benign biopsies
We developed and validated a deep convolutional neural network model to distinguish between morphological patterns in benign prostate biopsy whole slide images.
The proposed model has the potential to reduce the number of false negative cases in routine systematic prostate biopsies.
arXiv Detail & Related papers (2021-06-27T15:21:33Z) - Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images
This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient.
It locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG).
The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection.
arXiv Detail & Related papers (2021-03-23T16:08:43Z) - Comparisons of Graph Neural Networks on Cancer Classification Leveraging a Joint of Phenotypic and Genetic Features
We evaluated various graph neural networks (GNNs) leveraging a joint of phenotypic and genetic features for cancer type classification.
Among GNNs, ChebNet, GraphSAGE, and TAGCN showed the best performance, while GAT showed the worst.
arXiv Detail & Related papers (2021-01-14T20:53:49Z) - Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States.
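Volumetric severity measures like those described in this entry reduce to voxel counting over the segmented lesions, lungs, and lobes. The function and formulas below are illustrative assumptions, not the paper's definitions, which combine extent and high-opacity measures per lung and per lobe:

```python
def severity_measures(lesion_vox, high_opacity_vox, lung_vox):
    """Toy per-lung severity: percentage of lung volume that is abnormal,
    and the percentage occupied by high-opacity abnormalities."""
    percent_opacity = 100.0 * lesion_vox / lung_vox
    percent_high_opacity = 100.0 * high_opacity_vox / lung_vox
    return percent_opacity, percent_high_opacity

po, pho = severity_measures(lesion_vox=150, high_opacity_vox=60, lung_vox=3000)
```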
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.