CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from
Radiology and Pathology Images for Improved Computer Aided Diagnosis
- URL: http://arxiv.org/abs/2008.00119v1
- Date: Fri, 31 Jul 2020 23:44:25 GMT
- Title: CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from
Radiology and Pathology Images for Improved Computer Aided Diagnosis
- Authors: Indrani Bhattacharya and Arun Seetharaman and Wei Shao and Rewa Sood
and Christian A. Kunder and Richard E. Fan and Simon John Christoph Soerensen
and Jeffrey B. Wang and Pejman Ghanouni and Nikola C. Teslovich and James D.
Brooks and Geoffrey A. Sonn and Mirabela Rusu
- Abstract summary: We propose CorrSigNet, an automated two-step model that localizes prostate cancer on MRI.
First, the model learns MRI signatures of cancer that are correlated with corresponding histopathology features.
Second, the model uses the learned correlated MRI features to train a Convolutional Neural Network to localize prostate cancer.
- Score: 1.63324350193061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic Resonance Imaging (MRI) is widely used for screening and staging
prostate cancer. However, many prostate cancers have subtle features which are
not easily identifiable on MRI, resulting in missed diagnoses and alarming
variability in radiologist interpretation. Machine learning models have been
developed in an effort to improve cancer identification, but current models
localize cancer using MRI-derived features, while failing to consider the
disease pathology characteristics observed on resected tissue. In this paper,
we propose CorrSigNet, an automated two-step model that localizes prostate
cancer on MRI by capturing the pathology features of cancer. First, the model
learns MRI signatures of cancer that are correlated with corresponding
histopathology features using Common Representation Learning. Second, the model
uses the learned correlated MRI features to train a Convolutional Neural
Network to localize prostate cancer. The histopathology images are used only in
the first step to learn the correlated features. Once learned, these correlated
features can be extracted from MRI of new patients (without histopathology or
surgery) to localize cancer. We trained and validated our framework on a unique
dataset of 75 patients with 806 slices who underwent MRI followed by
prostatectomy surgery. We tested our method on an independent test set of 20
prostatectomy patients (139 slices, 24 cancerous lesions, 1.12M pixels) and
achieved a per-pixel sensitivity of 0.81, specificity of 0.71, AUC of 0.86 and
a per-lesion AUC of $0.96 \pm 0.07$, outperforming the current state-of-the-art
accuracy in predicting prostate cancer using MRI.
Related papers
- Enhancing Trust in Clinically Significant Prostate Cancer Prediction with Multiple Magnetic Resonance Imaging Modalities [61.36288157482697]
In the United States, prostate cancer is the second leading cause of cancer deaths in males, with a predicted 35,250 deaths in 2024.
In this paper, we investigate combining multiple MRI modalities to train a deep learning model to enhance trust in the models for clinically significant prostate cancer prediction.
arXiv Detail & Related papers (2024-11-07T12:48:27Z) - Towards Non-invasive and Personalized Management of Breast Cancer Patients from Multiparametric MRI via A Large Mixture-of-Modality-Experts Model [19.252851972152957]
We report a mixture-of-modality-experts model (MOME) that integrates multiparametric MRI information within a unified structure.
MOME demonstrated accurate and robust identification of breast cancer.
It could reduce the need for biopsies in BI-RADS 4 patients by 7.3%, classify triple-negative breast cancer with an AUROC of 0.709, and predict pathological complete response to neoadjuvant chemotherapy with an AUROC of 0.694.
arXiv Detail & Related papers (2024-08-08T05:04:13Z) - Improving Breast Cancer Grade Prediction with Multiparametric MRI Created Using Optimized Synthetic Correlated Diffusion Imaging [71.91773485443125]
Grading plays a vital role in breast cancer treatment planning.
The current tumor grading method involves extracting tissue from patients, leading to stress, discomfort, and high medical costs.
This paper examines using optimized CDI$s$ to improve breast cancer grade prediction.
arXiv Detail & Related papers (2024-05-13T15:48:26Z) - Cancer-Net BCa-S: Breast Cancer Grade Prediction using Volumetric Deep
Radiomic Features from Synthetic Correlated Diffusion Imaging [82.74877848011798]
The prevalence of breast cancer continues to grow, affecting about 300,000 females in the United States in 2023.
The gold-standard Scarff-Bloom-Richardson (SBR) grade has been shown to consistently indicate a patient's response to chemotherapy.
In this paper, we study the efficacy of deep learning for breast cancer grading based on synthetic correlated diffusion (CDI$s$) imaging.
arXiv Detail & Related papers (2023-04-12T15:08:34Z) - A Multi-Institutional Open-Source Benchmark Dataset for Breast Cancer
Clinical Decision Support using Synthetic Correlated Diffusion Imaging Data [82.74877848011798]
Cancer-Net BCa is a multi-institutional open-source benchmark dataset of volumetric CDI$s$ imaging data of breast cancer patients.
Cancer-Net BCa is publicly available as a part of a global open-source initiative dedicated to accelerating advancement in machine learning to aid clinicians in the fight against cancer.
arXiv Detail & Related papers (2023-04-12T05:41:44Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Implementation of Convolutional Neural Network Architecture on 3D
Multiparametric Magnetic Resonance Imaging for Prostate Cancer Diagnosis [0.0]
We propose a novel deep learning approach for automatic classification of prostate lesions in magnetic resonance images.
Our framework achieved the classification performance with the area under a Receiver Operating Characteristic curve value of 0.87.
Our proposed framework reflects the potential of assisting medical image interpretation in prostate cancer and reducing unnecessary biopsies.
arXiv Detail & Related papers (2021-12-29T16:47:52Z) - Automatic tumour segmentation in H&E-stained whole-slide images of the
pancreas [2.4431235585344475]
We propose a multi-task convolutional neural network to balance disease detection and segmentation accuracy.
We validated our approach on a dataset of 29 patients at different resolutions.
arXiv Detail & Related papers (2021-12-01T22:05:15Z) - A transformer-based deep learning approach for classifying brain
metastases into primary organ sites using clinical whole brain MRI images [4.263008461907835]
The treatment decisions for brain metastatic disease are driven by knowledge of the primary organ site cancer histology.
The use of clinical whole-brain data and the end-to-end pipeline obviate external human intervention.
The results establish that whole-brain imaging features are sufficiently discriminative to allow accurate diagnosis of the primary organ site of malignancy.
arXiv Detail & Related papers (2021-10-07T16:10:44Z) - Deep Convolutional Neural Networks Model-based Brain Tumor Detection in
Brain MRI Images [0.0]
Our work involves implementing a deep convolutional neural network (DCNN) for diagnosing brain tumors from MR images.
Our model can single out the MR images with tumors with an overall accuracy of 96%.
arXiv Detail & Related papers (2020-10-03T07:42:17Z) - Segmentation for Classification of Screening Pancreatic Neuroendocrine
Tumors [72.65802386845002]
This work presents comprehensive results to detect in the early stage the pancreatic neuroendocrine tumors (PNETs) in abdominal CT scans.
To the best of our knowledge, this task has not been studied before as a computational task.
Our approach outperforms state-of-the-art segmentation networks and achieves a sensitivity of 89.47% at a specificity of 81.08%.
arXiv Detail & Related papers (2020-04-04T21:21:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.