3D EAGAN: 3D edge-aware attention generative adversarial network for
prostate segmentation in transrectal ultrasound images
- URL: http://arxiv.org/abs/2311.04049v1
- Date: Tue, 7 Nov 2023 15:03:17 GMT
- Authors: Mengqing Liu, Xiao Shao, Liping Jiang, Kaizhi Wu
- Abstract summary: A 3D edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed in this paper.
EASNet is composed of an encoder-decoder-based U-Net backbone network, a detail compensation module, four 3D spatial and channel attention modules, an edge enhancement module, and a global feature extractor.
- Score: 1.2728267483418159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic prostate segmentation in TRUS images has always been a challenging
problem, since prostates in TRUS images have ambiguous boundaries and
inhomogeneous intensity distribution. Although many prostate segmentation
methods have been proposed, they still need improvement due to their lack of
sensitivity to edge information. Consequently, the objective of this study is
to devise a highly effective prostate segmentation method that overcomes these
limitations and achieves accurate segmentation of prostates in TRUS images. A
3D edge-aware attention generative adversarial network (3D EAGAN)-based
prostate segmentation method is proposed in this paper, which consists of an
edge-aware segmentation network (EASNet) that performs the prostate
segmentation and a discriminator network that distinguishes predicted prostates
from real prostates. The proposed EASNet is composed of an
encoder-decoder-based U-Net backbone network, a detail compensation module,
four 3D spatial and channel attention modules, an edge enhancement module, and a
global feature extractor. The detail compensation module is proposed to
compensate for the loss of detailed information caused by the down-sampling
process of the encoder. The features of the detail compensation module are
selectively enhanced by the 3D spatial and channel attention module.
Furthermore, an edge enhancement module is proposed to guide the shallow layers of
EASNet to focus on contour and edge information of the prostate. Finally, features
from shallow layers and hierarchical features from the decoder module are fused
through the global feature extractor to predict the segmented prostates.
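The paper itself ships no code, but the architecture above lends itself to a short illustration. Below is a minimal PyTorch sketch of a CBAM-style 3D spatial and channel attention block of the kind EASNet is described as using; all class names, the reduction ratio, and the tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): a CBAM-style 3D spatial and channel
# attention block of the kind the abstract describes. All names, shapes, and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    """Reweights channels using global average- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3, 4)))           # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3, 4)))            # (B, C)
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        return x * w


class SpatialAttention3D(nn.Module):
    """Highlights informative voxels from pooled channel statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)               # (B, 1, D, H, W)
        mx = x.amax(dim=1, keepdim=True)                # (B, 1, D, H, W)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class SpatialChannelAttention3D(nn.Module):
    """Channel attention followed by spatial attention, as in CBAM."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel = ChannelAttention3D(channels, reduction)
        self.spatial = SpatialAttention3D()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feats = torch.randn(1, 32, 16, 64, 64)              # (B, C, D, H, W) TRUS features
    print(SpatialChannelAttention3D(32)(feats).shape)   # torch.Size([1, 32, 16, 64, 64])
```

In a full EASNet-style network, such blocks would refine the detail-compensation features before fusion in the decoder, and a separate 3D discriminator would be trained adversarially to distinguish predicted masks from ground-truth masks.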
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z) - BCS-Net: Boundary, Context and Semantic for Automatic COVID-19 Lung
Infection Segmentation from CT Images [83.82141604007899]
BCS-Net is a novel network for automatic COVID-19 lung infection segmentation from CT images.
BCS-Net follows an encoder-decoder architecture, with most of its design effort focused on the decoder stage.
In each BCSR block, the attention-guided global context (AGGC) module is designed to learn the most valuable encoder features for the decoder.
arXiv Detail & Related papers (2022-07-17T08:54:07Z) - Recurrent Feature Propagation and Edge Skip-Connections for Automatic
Abdominal Organ Segmentation [13.544665065396373]
We propose a 3D network with four main components trained end-to-end: an encoder, an edge detector, a decoder with edge skip-connections, and a recurrent feature propagation head.
Experimental results show that the proposed network outperforms several state-of-the-art models.
arXiv Detail & Related papers (2022-01-02T08:33:19Z) - BEFD: Boundary Enhancement and Feature Denoising for Vessel Segmentation [15.386077363312372]
We propose a Boundary Enhancement and Feature Denoising (BEFD) module to improve the network's ability to extract boundary information in semantic segmentation.
By introducing a Sobel edge detector, the network acquires an additional edge prior, thus enhancing boundaries in an unsupervised manner for medical image segmentation (a minimal Sobel sketch follows this list).
arXiv Detail & Related papers (2021-04-08T13:44:47Z) - Global Guidance Network for Breast Lesion Segmentation in Ultrasound
Images [84.03487786163781]
We develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection modules.
Our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation.
arXiv Detail & Related papers (2021-04-05T13:15:22Z) - S3Net: 3D LiDAR Sparse Semantic Segmentation Network [1.330528227599978]
S3Net is a novel convolutional neural network for LiDAR point cloud semantic segmentation.
It adopts an encoder-decoder backbone that consists of a Sparse Intra-channel Attention Module (SIntraAM) and a Sparse Inter-channel Attention Module (SInterAM).
arXiv Detail & Related papers (2021-03-15T22:15:24Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum
Ultrasound [74.22397862400177]
We propose a novel frustum-ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - DSU-net: Dense SegU-net for automatic head-and-neck tumor segmentation
in MR images [30.747375849126925]
We propose a Dense SegU-net (DSU-net) framework for automatic nasopharyngeal carcinoma (NPC) segmentation in MRI.
To combat the potential vanishing-gradient problem, we introduce dense blocks, which facilitate feature propagation and reuse.
Our proposed architecture outperforms the existing state-of-the-art segmentation networks.
arXiv Detail & Related papers (2020-06-11T09:33:41Z) - Unsupervised Instance Segmentation in Microscopy Images via Panoptic
Domain Adaptation and Task Re-weighting [86.33696045574692]
We propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images.
We first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images.
Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation.
arXiv Detail & Related papers (2020-05-05T11:08:26Z) - Deep Attentive Features for Prostate Segmentation in 3D Transrectal
Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module utilizes the attention mechanism to selectively leverage the multilevel features integrated from different layers.
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)