MicroSegNet: A Deep Learning Approach for Prostate Segmentation on
Micro-Ultrasound Images
- URL: http://arxiv.org/abs/2305.19956v3
- Date: Thu, 25 Jan 2024 20:52:23 GMT
- Title: MicroSegNet: A Deep Learning Approach for Prostate Segmentation on
Micro-Ultrasound Images
- Authors: Hongxu Jiang, Muhammad Imran, Preethika Muralidharan, Anjali Patel,
Jake Pensa, Muxuan Liang, Tarik Benidir, Joseph R. Grajo, Jason P. Joseph,
Russell Terry, John Michael DiBianco, Li-Ming Su, Yuyin Zhou, Wayne G.
Brisbane, and Wei Shao
- Abstract summary: Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound.
Prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline.
This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges.
- Score: 10.10595151162924
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that
provides 3-4 times higher resolution than traditional ultrasound, potentially
enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate
segmentation is crucial for prostate volume measurement, cancer diagnosis,
prostate biopsy, and treatment planning. However, prostate segmentation on
micro-US is challenging due to artifacts and indistinct borders between the
prostate, bladder, and urethra in the midline. This paper presents MicroSegNet,
a multi-scale annotation-guided transformer UNet model designed specifically to
tackle these challenges. During the training process, MicroSegNet focuses more
on regions that are hard to segment (hard regions), characterized by
discrepancies between expert and non-expert annotations. We achieve this by
proposing an annotation-guided binary cross entropy (AG-BCE) loss that assigns
a larger weight to prediction errors in hard regions and a lower weight to
prediction errors in easy regions. The AG-BCE loss was seamlessly integrated
into the training process through the utilization of multi-scale deep
supervision, enabling MicroSegNet to capture global contextual dependencies and
local information at various scales. We trained our model using micro-US images
from 55 patients, followed by evaluation on 20 patients. Our MicroSegNet model
achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm,
outperforming several state-of-the-art segmentation methods, as well as three
human annotators with different experience levels. Our code is publicly
available at https://github.com/mirthAI/MicroSegNet and our dataset is publicly
available at https://zenodo.org/records/10475293.
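The abstract specifies the behavior of the AG-BCE loss (larger weights on prediction errors in hard regions, smaller weights in easy regions) but not its exact form. The sketch below is one plausible PyTorch rendering under stated assumptions: the hard-region mask is precomputed from the disagreement between expert and non-expert annotations, the weights w_hard and w_easy are illustrative placeholders, and the multi-scale term simply averages the loss over decoder outputs. It is not the authors' implementation; see the linked GitHub repository for the actual code.

```python
# Minimal sketch of an annotation-guided BCE (AG-BCE) loss with multi-scale
# deep supervision, as described in the abstract. The hard-region mask, the
# weight values, and the per-scale averaging are illustrative assumptions.
import torch.nn.functional as F


def ag_bce_loss(logits, target, hard_mask, w_hard=4.0, w_easy=1.0):
    """Per-pixel BCE weighted by region difficulty.

    logits    : raw network outputs, shape (B, 1, H, W)
    target    : expert binary labels, shape (B, 1, H, W), float
    hard_mask : 1.0 where expert and non-expert annotations disagree
                (the "hard regions"), 0.0 elsewhere
    w_hard, w_easy : illustrative weights with w_hard > w_easy
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    weights = w_easy + (w_hard - w_easy) * hard_mask
    return (weights * per_pixel).mean()


def multi_scale_loss(outputs, target, hard_mask):
    """Applies AG-BCE at every decoder scale (multi-scale deep supervision)."""
    total = 0.0
    for logits in outputs:  # one logit map per decoder scale
        # Resize the labels and hard-region mask to this scale's resolution.
        t = F.interpolate(target, size=logits.shape[-2:], mode="nearest")
        m = F.interpolate(hard_mask, size=logits.shape[-2:], mode="nearest")
        total = total + ag_bce_loss(logits, t, m)
    return total / len(outputs)
```

Nearest-neighbor interpolation keeps the labels and the hard-region mask binary at every supervision scale, so the region weighting stays well defined on the coarser decoder outputs.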
Related papers
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- Towards Confident Detection of Prostate Cancer using High Resolution Micro-ultrasound [7.826781688190151]
Detection of prostate cancer during transrectal ultrasound-guided biopsy is challenging.
Recent advancements in high-frequency ultrasound imaging (micro-ultrasound) have drastically increased the capability of tissue imaging at high resolution.
Our aim is to investigate the development of a robust deep learning model specifically for micro-ultrasound-guided prostate cancer biopsy.
arXiv Detail & Related papers (2022-07-21T14:00:00Z)
- Comparison of automatic prostate zones segmentation models in MRI images using U-net-like architectures [0.9786690381850356]
Prostate cancer is the sixth leading cause of cancer death in males worldwide.
Currently, the segmentation of Regions of Interest (ROI) containing tumor tissue is carried out manually by expert doctors.
Several research works have tackled the challenge of automatically segmenting and extracting features of the ROI from magnetic resonance images.
In this work, six deep learning models were trained and analyzed with a dataset of MRI images obtained from the Centre Hospitalaire de Dijon and Universitat Politecnica de Catalunya.
arXiv Detail & Related papers (2022-07-19T18:00:41Z)
- Learning to segment prostate cancer by aggressiveness from scribbles in bi-parametric MRI [0.0]
We propose a deep U-Net based model to tackle the challenging task of prostate cancer segmentation by aggressiveness in MRI based on weak annotations.
We show that we can approach the fully-supervised baseline in grading the lesions by using only 6.35% of voxels for training.
We report a lesion-wise Cohen's kappa score of 0.29 ± 0.07 for the weak model versus 0.32 ± 0.05 for the baseline.
arXiv Detail & Related papers (2022-07-01T11:52:05Z)
- Global Guidance Network for Breast Lesion Segmentation in Ultrasound Images [84.03487786163781]
We develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection modules.
Our network outperforms other medical image segmentation methods and the recent semantic segmentation methods on breast ultrasound lesion segmentation.
arXiv Detail & Related papers (2021-04-05T13:15:22Z)
- FocusNetv2: Imbalanced Large and Small Organ Segmentation with Adversarial Shape Constraint for Head and Neck CT Images [82.48587399026319]
Delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs.
We propose a novel two-stage deep neural network, FocusNetv2, to solve this challenging problem by automatically locating, ROI-pooling, and segmenting small organs.
In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure the consistency between estimated small-organ shapes and organ shape prior knowledge.
arXiv Detail & Related papers (2021-04-05T04:45:31Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Deep learning in magnetic resonance prostate segmentation: A review and a new perspective [4.453410156617238]
We review the state-of-the-art deep learning algorithms in MR prostate segmentation.
We provide insights into the field by discussing their limitations and strengths.
We propose an optimised 2D U-Net for MR prostate segmentation.
arXiv Detail & Related papers (2020-11-16T08:58:38Z)
- A weakly supervised registration-based framework for prostate segmentation via the combination of statistical shape model and CNN [4.404555861424138]
We propose a weakly supervised registration-based framework for precise prostate segmentation.
An inception-based neural network (SSM-Net) was exploited to predict the model transform, shape control parameters and a fine-tuning vector.
A residual U-net (ResU-Net) was employed to predict a probability label map from the input images.
arXiv Detail & Related papers (2020-07-23T00:24:57Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
- Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module selectively leverages the multilevel features integrated from different layers.
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)