Prostate-Specific Foundation Models for Enhanced Detection of Clinically Significant Cancer
- URL: http://arxiv.org/abs/2502.00366v2
- Date: Tue, 04 Feb 2025 17:00:43 GMT
- Title: Prostate-Specific Foundation Models for Enhanced Detection of Clinically Significant Cancer
- Authors: Jeong Hoon Lee, Cynthia Xinran Li, Hassan Jahanandish, Indrani Bhattacharya, Sulaiman Vesal, Lichun Zhang, Shengtian Sang, Moon Hyung Choi, Simon John Christoph Soerensen, Steve Ran Zhou, Elijah Richard Sommer, Richard Fan, Pejman Ghanouni, Yuze Song, Tyler M. Seibert, Geoffrey A. Sonn, Mirabela Rusu,
- Abstract summary: Even when using MRI, radiologists exhibit low specificity and significant inter-observer variability.
Here we present the prostate vision contrastive network (ProViCNet).
ProViCNet was trained and validated using 4,401 patients across six institutions.
- Score: 2.546403115506584
- Abstract: Accurate prostate cancer diagnosis remains challenging. Even when using MRI, radiologists exhibit low specificity and significant inter-observer variability, leading to potential delays or inaccuracies in identifying clinically significant cancers. This results in numerous unnecessary biopsies and the risk of missing clinically significant cancers. Here we present the prostate vision contrastive network (ProViCNet), prostate organ-specific vision foundation models for Magnetic Resonance Imaging (MRI) and Trans-Rectal Ultrasound imaging (TRUS) for comprehensive cancer detection. ProViCNet was trained and validated on 4,401 patients across six institutions as a prostate cancer detection model for radiology images, relying on patch-level contrastive learning guided by biopsy-confirmed radiologist annotations. ProViCNet demonstrated consistent performance across multiple internal and external validation cohorts, with area under the receiver operating characteristic curve (AUC) values ranging from 0.875 to 0.966 for mpMRI, significantly outperforming radiologists in the reader study (0.907 versus 0.805, p<0.001), while achieving 0.670 to 0.740 for TRUS. We also integrated ProViCNet with standard prostate-specific antigen (PSA) testing to develop a virtual screening test and showed that it maintains high sensitivity for detecting clinically significant cancers while more than doubling specificity from 15% to 38% (p<0.001), thereby substantially reducing unnecessary biopsies. These findings highlight ProViCNet's potential to enhance prostate cancer diagnostic accuracy and reduce unnecessary biopsies, thereby optimizing diagnostic pathways.
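The abstract describes patch-level contrastive learning guided by biopsy-confirmed radiologist annotations but does not spell out the training objective here, so the snippet below is only a minimal PyTorch sketch of one common way such an objective is realized: a supervised contrastive loss over patch embeddings, where patches sharing a biopsy-confirmed label are pulled together. The function name, label convention, and temperature are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code): a patch-level supervised
# contrastive objective in the spirit described in the abstract. Patches cut
# from biopsy-confirmed, radiologist-annotated regions share a label and are
# pulled together in embedding space. Label convention (0 = benign,
# 1 = clinically significant cancer) and temperature are assumptions.
import torch
import torch.nn.functional as F

def patch_supcon_loss(embeddings: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) patch embeddings; labels: (N,) patch-level labels."""
    z = F.normalize(embeddings, dim=1)                    # unit-norm embeddings
    sim = (z @ z.t()) / temperature                       # pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude self-pairs
    # positives: other patches carrying the same biopsy-confirmed label
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return per_anchor[pos_mask.any(dim=1)].mean()         # anchors with positives

# Toy usage: random embeddings standing in for MRI/TRUS patch encoder outputs.
emb = torch.randn(16, 128, requires_grad=True)
lab = torch.randint(0, 2, (16,))
patch_supcon_loss(emb, lab).backward()
```

In a real pipeline the embeddings would come from the MRI or TRUS vision encoder and the labels from the annotated lesion masks; this sketch only illustrates the general shape of a label-guided contrastive signal.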
Related papers
- Cancer-Net PCa-Seg: Benchmarking Deep Learning Models for Prostate Cancer Segmentation Using Synthetic Correlated Diffusion Imaging [65.83291923029985]
Prostate cancer (PCa) is the most prevalent cancer among men in the United States, accounting for nearly 300,000 cases (29% of all diagnoses) and 35,000 deaths in 2024.
Traditional screening methods such as prostate-specific antigen (PSA) testing and magnetic resonance imaging (MRI) have been pivotal in diagnosis, but have faced limitations in specificity and generalizability.
We employ several state-of-the-art deep learning models, including U-Net, SegResNet, Swin UNETR, Attention U-Net, and LightM-UNet, to segment PCa lesions from a 200-patient CDI$^s$ cohort.
arXiv Detail & Related papers (2025-01-15T22:23:41Z)
- Mask Enhanced Deeply Supervised Prostate Cancer Detection on B-mode Micro-Ultrasound [3.716493093803398]
Prostate cancer is a leading cause of cancer-related deaths among men.
Recent development of high frequency, micro-ultrasound imaging offers improved resolution compared to conventional ultrasound.
Features of prostate cancer remain subtle, with ambiguous borders against normal tissue and large variations in appearance.
We present a novel approach to automatically detect and segment clinically significant prostate cancer on B-mode micro-ultrasound images.
arXiv Detail & Related papers (2024-12-14T23:40:53Z)
- Enhancing Trust in Clinically Significant Prostate Cancer Prediction with Multiple Magnetic Resonance Imaging Modalities [61.36288157482697]
In the United States, prostate cancer is the second leading cause of cancer death in males, with a predicted 35,250 deaths in 2024.
In this paper, we investigate combining multiple MRI modalities to train a deep learning model to enhance trust in the models for clinically significant prostate cancer prediction.
arXiv Detail & Related papers (2024-11-07T12:48:27Z)
- AI-assisted prostate cancer detection and localisation on biparametric MR by classifying radiologist-positives [5.75804178993065]
We propose to develop deep learning models that improve the overall cancer diagnostic accuracy.
We develop a single voxel-level classification model, with a simple percentage threshold to determine positive cases.
Based on the presented experiments from two clinical data sets, we show that the proposed strategy can improve the diagnostic accuracy.
arXiv Detail & Related papers (2024-10-30T14:59:57Z)
- Optimizing Synthetic Correlated Diffusion Imaging for Breast Cancer Tumour Delineation [71.91773485443125]
We show that the best AUC is achieved by the optimized CDI$^s$ modality, outperforming the best gold-standard modality by 0.0044.
Notably, the optimized CDI$^s$ modality also achieves AUC values over 0.02 higher than the unoptimized CDI$^s$ modality.
arXiv Detail & Related papers (2024-05-13T16:07:58Z)
- Enhancing Clinically Significant Prostate Cancer Prediction in T2-weighted Images through Transfer Learning from Breast Cancer [71.91773485443125]
Transfer learning is a technique that leverages acquired features from a domain with richer data to enhance the performance of a domain with limited data.
In this paper, we investigate the improvement of clinically significant prostate cancer prediction in T2-weighted images through transfer learning from breast cancer.
arXiv Detail & Related papers (2024-05-13T15:57:27Z)
- ProsDectNet: Bridging the Gap in Prostate Cancer Detection via Transrectal B-mode Ultrasound Imaging [2.6024562346319167]
ProsDectNet is a multi-task deep learning approach that localizes prostate cancer on B-mode ultrasound.
We trained and validated ProsDectNet using a cohort of 289 patients who underwent MRI-TRUS fusion targeted biopsy.
Our results demonstrate that ProsDectNet has the potential to be used as a computer-aided diagnosis system.
arXiv Detail & Related papers (2023-12-08T19:40:35Z)
- Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology: AI-Based Decision Support System for Gastric Cancer Treatment [50.89811515036067]
Gastric endoscopic screening is an effective way to decide appropriate gastric cancer (GC) treatment at an early stage, reducing the GC-associated mortality rate.
We propose a practical AI system that enables five subclassifications of GC pathology, which can be directly matched to general GC treatment guidance.
arXiv Detail & Related papers (2022-02-17T08:33:52Z)
- Using deep learning to detect patients at risk for prostate cancer despite benign biopsies [0.7739635712759623]
We developed and validated a deep convolutional neural network model to distinguish between morphological patterns in benign prostate biopsy whole slide images.
The proposed model has the potential to reduce the number of false negative cases in routine systematic prostate biopsies.
arXiv Detail & Related papers (2021-06-27T15:21:33Z)
- CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis [1.63324350193061]
We propose CorrSigNet, an automated two-step model that localizes prostate cancer on MRI.
First, the model learns MRI signatures of cancer that are correlated with corresponding histopathology features.
Second, the model uses the learned correlated MRI features to train a Convolutional Neural Network to localize prostate cancer.
arXiv Detail & Related papers (2020-07-31T23:44:25Z)
- Segmentation for Classification of Screening Pancreatic Neuroendocrine Tumors [72.65802386845002]
This work presents comprehensive results on early-stage detection of pancreatic neuroendocrine tumors (PNETs) in abdominal CT scans.
To the best of our knowledge, this task has not been studied before as a computational task.
Our approach outperforms state-of-the-art segmentation networks and achieves a sensitivity of 89.47% at a specificity of 81.08%.
arXiv Detail & Related papers (2020-04-04T21:21:44Z)