A Cascaded Residual UNET for Fully Automated Segmentation of Prostate
and Peripheral Zone in T2-weighted 3D Fast Spin Echo Images
- URL: http://arxiv.org/abs/2012.13501v1
- Date: Fri, 25 Dec 2020 03:16:52 GMT
- Title: A Cascaded Residual UNET for Fully Automated Segmentation of Prostate
and Peripheral Zone in T2-weighted 3D Fast Spin Echo Images
- Authors: Lavanya Umapathy, Wyatt Unger, Faryal Shareef, Hina Arif, Diego
Martin, Maria Altbach, and Ali Bilgin
- Abstract summary: Multi-parametric MR images have been shown to be effective in the non-invasive diagnosis of prostate cancer.
We propose a fully automated cascaded deep learning architecture with residual blocks, Cascaded MRes-UNET, for segmentation of the prostate gland and the peripheral zone.
- Score: 1.6710577107094644
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-parametric MR images have been shown to be effective in the
non-invasive diagnosis of prostate cancer. Automated segmentation of the
prostate eliminates the need for time-consuming manual annotation by a
radiologist and improves the efficiency of extracting imaging features
for the characterization of prostate tissues. In this work, we propose a fully
automated cascaded deep learning architecture with residual blocks, Cascaded
MRes-UNET, for segmentation of the prostate gland and the peripheral zone in
one pass through the network. The network yields high Dice
($0.91\pm0.02$), precision ($0.91\pm0.04$), and recall ($0.92\pm0.03$) scores in
prostate segmentation compared to manual annotations by an experienced
radiologist. The average difference in total prostate volume estimation is less
than 5%.
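This page ships no code, but the cascade idea is straightforward to prototype: a first residual U-Net proposes a whole-gland mask, and a second residual U-Net receives the image together with that mask and labels gland and peripheral zone in the same forward pass. The PyTorch sketch below is a minimal illustration under those assumptions; the class names, channel widths, depth, and exact conditioning scheme are guesses, not the authors' Cascaded MRes-UNET.

```python
# Illustrative sketch of a two-stage cascaded residual U-Net.
# All hyperparameters and the conditioning scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResBlock(nn.Module):
    """Two 3x3 convolutions with a residual (identity/projection) skip."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.norm1 = nn.BatchNorm2d(out_ch)
        self.norm2 = nn.BatchNorm2d(out_ch)
        self.skip = (nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch
                     else nn.Identity())

    def forward(self, x):
        h = F.relu(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return F.relu(h + self.skip(x))


class ResUNet(nn.Module):
    """A small U-Net whose encoder/decoder stages are residual blocks."""
    def __init__(self, in_ch, out_ch, widths=(32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.enc.append(ResBlock(ch, w))
            ch = w
        self.bottleneck = ResBlock(ch, 2 * ch)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        ch = 2 * widths[-1]
        for w in reversed(widths):
            self.up.append(nn.ConvTranspose2d(ch, w, 2, stride=2))
            self.dec.append(ResBlock(2 * w, w))
            ch = w
        self.head = nn.Conv2d(ch, out_ch, 1)

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)
            x = F.max_pool2d(x, 2)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)


class CascadedResUNet(nn.Module):
    """Stage 1 segments the whole gland; its probability map conditions
    stage 2, which labels gland and peripheral zone in the same pass."""
    def __init__(self):
        super().__init__()
        self.stage1 = ResUNet(in_ch=1, out_ch=1)   # gland vs. background
        self.stage2 = ResUNet(in_ch=2, out_ch=3)   # bg / gland / PZ

    def forward(self, x):
        gland = torch.sigmoid(self.stage1(x))
        zones = self.stage2(torch.cat([x, gland], dim=1))
        return gland, zones


if __name__ == "__main__":
    model = CascadedResUNet()
    gland, zones = model(torch.randn(1, 1, 128, 128))
    print(gland.shape, zones.shape)  # (1, 1, 128, 128), (1, 3, 128, 128)
```

The reported metrics would then be computed between thresholded predictions and the radiologist's masks; Dice for binary masks $A$ and $B$ is $2|A\cap B|/(|A|+|B|)$.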
Related papers
- Multi-modality transrectal ultrasound video classification for
identification of clinically significant prostate cancer [4.896561300855359]
We propose a framework for the classification of clinically significant prostate cancer (csPCa) from multi-modality TRUS videos.
The proposed framework is evaluated on an in-house dataset containing 512 TRUS videos.
arXiv Detail & Related papers (2024-02-14T07:06:30Z)
- Cancer-Net PCa-Gen: Synthesis of Realistic Prostate Diffusion Weighted Imaging Data via Anatomic-Conditional Controlled Latent Diffusion [68.45407109385306]
In Canada, prostate cancer is the most common form of cancer in men and accounted for 20% of new cancer cases for this demographic in 2022.
There has been significant interest in the development of deep neural networks for prostate cancer diagnosis, prognosis, and treatment planning using diffusion weighted imaging (DWI) data.
In this study, we explore the efficacy of latent diffusion for generating realistic prostate DWI data through the introduction of an anatomic-conditional controlled latent diffusion strategy.
arXiv Detail & Related papers (2023-11-30T15:11:03Z)
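Anatomic-conditional control in a latent diffusion model is often wired by feeding the denoiser the noisy latent concatenated with a conditioning map derived from the anatomy. The toy sketch below shows one DDPM-style training step under that assumption; the denoiser, noise schedule, and shapes are illustrative stand-ins, not Cancer-Net PCa-Gen.

```python
# Toy sketch of anatomic-conditional latent diffusion: the denoiser sees the
# noisy latent concatenated with an anatomical-mask channel. All details here
# (network, schedule, shapes) are illustrative assumptions.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    def __init__(self, latent_ch=4, cond_ch=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, z_t, anat_mask, t):
        # Crude scalar timestep embedding broadcast onto the latent.
        z_t = z_t + t.view(-1, 1, 1, 1).float() / 1000.0
        return self.net(torch.cat([z_t, anat_mask], dim=1))  # predicts noise

# One training step: noise a clean latent at a random timestep, then ask the
# denoiser to recover the noise, conditioned on the anatomy.
denoiser = CondDenoiser()
z0 = torch.randn(2, 4, 32, 32)                      # latents from a VAE encoder
mask = torch.randint(0, 2, (2, 1, 32, 32)).float()  # anatomical condition
t = torch.randint(0, 1000, (2,))
alpha_bar = torch.cos(t / 1000.0 * torch.pi / 2) ** 2   # toy cosine schedule
noise = torch.randn_like(z0)
z_t = (alpha_bar.sqrt().view(-1, 1, 1, 1) * z0
       + (1 - alpha_bar).sqrt().view(-1, 1, 1, 1) * noise)
loss = nn.functional.mse_loss(denoiser(z_t, mask, t), noise)
loss.backward()
```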
- Thoracic Cartilage Ultrasound-CT Registration using Dense Skeleton Graph [49.11220791279602]
It is challenging to accurately map planned paths from a generic atlas to individual patients, particularly for thoracic applications.
A graph-based non-rigid registration method is proposed to enable the transfer of planned paths from the atlas to the current setup.
arXiv Detail & Related papers (2023-07-07T18:57:21Z)
- Prostate Lesion Estimation using Prostate Masks from Biparametric MRI [0.0]
Biparametric MRI has emerged as an alternative to multiparametric prostate MRI.
One major issue with biparametric MRI is the difficulty of detecting clinically significant prostate cancer (csPCa).
Deep learning algorithms have emerged as an alternative solution to detect csPCA in cohort studies.
arXiv Detail & Related papers (2023-01-11T13:20:24Z)
- ProstAttention-Net: A deep attention model for prostate cancer segmentation by aggressiveness in MRI scans [4.964026843682986]
We propose a novel end-to-end multi-class network that jointly segments the prostate gland and cancer lesions with Gleason score (GS) group grading.
Our model achieves 69.0% $\pm$ 14.5% sensitivity at 2.9 false positives per patient on the whole prostate and 70.8% $\pm$ 14.4% sensitivity at 1.5 false positives when considering the peripheral zone (PZ) only.
arXiv Detail & Related papers (2022-11-23T16:21:21Z)
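The abstract implies that the gland prediction steers lesion grading. A common way to realize that is to use the prostate probability map as a spatial attention gate on the lesion branch; the sketch below shows only that gating and is a plausible reading, not the published ProstAttention-Net module.

```python
# Hedged sketch: gate the lesion/Gleason-grading branch with the gland
# segmentation probability. A guess at the mechanism, not the authors' code.
import torch
import torch.nn as nn

class GatedLesionHead(nn.Module):
    def __init__(self, feat_ch=64, n_gs_groups=5):
        super().__init__()
        self.gland_head = nn.Conv2d(feat_ch, 1, 1)                 # prostate vs. bg
        self.lesion_head = nn.Conv2d(feat_ch, n_gs_groups + 1, 1)  # GS groups + bg

    def forward(self, feats):
        gland_logits = self.gland_head(feats)
        attn = torch.sigmoid(gland_logits)              # P(prostate) in [0, 1]
        lesion_logits = self.lesion_head(feats * attn)  # focus inside the gland
        return gland_logits, lesion_logits

feats = torch.randn(1, 64, 96, 96)    # shared decoder features (hypothetical)
gland, lesion = GatedLesionHead()(feats)
print(gland.shape, lesion.shape)      # (1, 1, 96, 96), (1, 6, 96, 96)
```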
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Comparison of automatic prostate zones segmentation models in MRI images using U-net-like architectures [0.9786690381850356]
Prostate cancer is the sixth leading cause of cancer death in males worldwide.
Currently, the segmentation of Regions of Interest (ROI) containing a tumor tissue is carried out manually by expert doctors.
Several research works have tackled the challenge of automatically segmenting and extracting features of the ROI from magnetic resonance images.
In this work, six deep learning models were trained and analyzed with a dataset of MRI images obtained from the Centre Hospitalaire de Dijon and Universitat Politecnica de Catalunya.
arXiv Detail & Related papers (2022-07-19T18:00:41Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7$^\circ$ (1.2$^\circ$).
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
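A multi-input multi-task model of this kind is typically built from a per-frame image encoder whose embeddings are fused with the tracked probe pose and run through a recurrent layer, with one output head per prediction set. The sketch below follows that generic pattern; every architectural detail is an assumption, not the paper's implementation.

```python
# Generic sketch of a multi-input multi-task sequence model: per-frame image
# embeddings fused with tracked probe coordinates, passed through a GRU, with
# two prediction heads. Names, sizes, and heads are illustrative assumptions.
import torch
import torch.nn as nn

class ProbeGuidanceNet(nn.Module):
    def __init__(self, img_feat=128, coord_dim=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(           # tiny stand-in image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, img_feat),
        )
        self.rnn = nn.GRU(img_feat + coord_dim, hidden, batch_first=True)
        self.head_align = nn.Linear(hidden, 3)  # e.g. rotation offsets (deg)
        self.head_qual = nn.Linear(hidden, 2)   # e.g. image-quality class

    def forward(self, frames, coords):
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(torch.cat([f, coords], dim=-1))
        return self.head_align(h[:, -1]), self.head_qual(h[:, -1])

frames = torch.randn(2, 8, 1, 64, 64)   # batch of 8-frame ultrasound clips
coords = torch.randn(2, 8, 6)           # optically tracked pose per frame
align, quality = ProbeGuidanceNet()(frames, coords)
```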
- Anisotropic 3D Multi-Stream CNN for Accurate Prostate Segmentation from Multi-Planar MRI [7.458812893013963]
We propose an anisotropic 3D multi-stream CNN architecture, which processes additional scan directions to produce a higher-resolution isotropic prostate segmentation.
We compare two variants of our architecture, which work on two (dual-plane) and three (triple-plane) image orientations, respectively.
arXiv Detail & Related papers (2020-09-23T12:56:14Z)
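One plausible reading of the multi-stream design: a separate 3D encoder per scan orientation, with features resampled onto a common isotropic grid and fused before a shared segmentation head. The sketch below fixes arbitrary shapes and fuses by summation; both choices are assumptions, not the paper's architecture.

```python
# Sketch of a triple-plane multi-stream 3D CNN: one encoder per orientation,
# features resampled to a shared isotropic grid and summed. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stream(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class MultiPlaneSeg(nn.Module):
    def __init__(self, planes=3, ch=16, out_size=(64, 64, 64)):
        super().__init__()
        self.streams = nn.ModuleList(Stream(ch) for _ in range(planes))
        self.head = nn.Conv3d(ch, 2, 1)        # prostate vs. background
        self.out_size = out_size

    def forward(self, volumes):                # one anisotropic volume per plane
        fused = sum(
            F.interpolate(s(v), size=self.out_size, mode="trilinear",
                          align_corners=False)
            for s, v in zip(self.streams, volumes)
        )
        return self.head(fused)

axial = torch.randn(1, 1, 24, 64, 64)     # thick slices, high in-plane res
sagittal = torch.randn(1, 1, 64, 24, 64)
coronal = torch.randn(1, 1, 64, 64, 24)
logits = MultiPlaneSeg()([axial, sagittal, coronal])
print(logits.shape)                        # (1, 2, 64, 64, 64) isotropic grid
```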
- HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation [56.86396352441269]
We tackle the challenging task of prostate segmentation in CT images with a two-stage network: the first stage quickly localizes the prostate, and the second stage segments it accurately.
To precisely segment the prostate in the second stage, we formulate prostate segmentation into a multi-task learning framework, which includes a main task to segment the prostate, and an auxiliary task to delineate the prostate boundary.
By contrast, conventional multi-task deep networks typically share most of the parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificities of different tasks are overlooked.
arXiv Detail & Related papers (2020-05-21T02:53:52Z)
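The second-stage idea, a main gland-segmentation task plus an auxiliary boundary-delineation task, is easy to prototype with a shared encoder and two heads. In the sketch below the boundary target is derived from the mask by morphological erosion, a common choice that is assumed here rather than taken from HF-UNet.

```python
# Sketch of the stage-2 multi-task idea: shared encoder, one head for the
# prostate mask and one for its boundary. Erosion-based boundary labels are
# an assumption, not necessarily the HF-UNet recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadSeg(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(ch, 1, 1)   # main task: prostate mask
        self.edge_head = nn.Conv2d(ch, 1, 1)   # auxiliary task: boundary

    def forward(self, x):
        f = self.encoder(x)
        return self.mask_head(f), self.edge_head(f)

def boundary_from_mask(mask):
    """Boundary = mask minus its erosion (min-pool via -maxpool(-mask))."""
    eroded = -F.max_pool2d(-mask, 3, stride=1, padding=1)
    return (mask - eroded).clamp(0, 1)

model = TwoHeadSeg()
x = torch.randn(2, 1, 64, 64)
mask_gt = (torch.randn(2, 1, 64, 64) > 0.5).float()
edge_gt = boundary_from_mask(mask_gt)
mask_logit, edge_logit = model(x)
loss = (F.binary_cross_entropy_with_logits(mask_logit, mask_gt)
        + 0.5 * F.binary_cross_entropy_with_logits(edge_logit, edge_gt))
loss.backward()
```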
- Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module utilizes the attention mechanism to selectively leverage the multilevel features integrated from different layers.
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)
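Selectively leveraging multi-level features usually means resampling each level to a common resolution, predicting per-pixel weights across levels, and taking the weighted sum. The sketch below follows that generic pattern, in 2D for brevity; it is not the paper's 3D TRUS module.

```python
# Generic sketch of attention over multi-level features: resize each level to
# a common size, predict softmax weights across levels per pixel, and sum.
# A common pattern; all details are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelAttention(nn.Module):
    def __init__(self, ch=32, levels=3):
        super().__init__()
        self.score = nn.Conv2d(ch * levels, levels, 1)  # one weight map per level

    def forward(self, feats):                 # list of (B, ch, Hi, Wi)
        size = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=size, mode="bilinear",
                               align_corners=False) for f in feats]
        attn = torch.softmax(self.score(torch.cat(feats, dim=1)), dim=1)
        return sum(attn[:, i:i + 1] * f for i, f in enumerate(feats))

levels = [torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32),
          torch.randn(1, 32, 16, 16)]
fused = MultiLevelAttention()(levels)
print(fused.shape)                            # (1, 32, 64, 64)
```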
This list is automatically generated from the titles and abstracts of the papers on this site.