FAU-Net: An Attention U-Net Extension with Feature Pyramid Attention for
Prostate Cancer Segmentation
- URL: http://arxiv.org/abs/2309.01322v1
- Date: Mon, 4 Sep 2023 02:54:58 GMT
- Title: FAU-Net: An Attention U-Net Extension with Feature Pyramid Attention for
Prostate Cancer Segmentation
- Authors: Pablo Cesar Quihui-Rubio and Daniel Flores-Araiza and Miguel
Gonzalez-Mendoza and Christian Mata and Gilberto Ochoa-Ruiz
- Abstract summary: This contribution presents a deep learning method for the segmentation of prostate zones in MRI images based on U-Net.
The proposed model is compared to seven different U-Net-based architectures.
- Score: 1.8499314936771563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This contribution presents a deep learning method for the segmentation of
prostate zones in MRI images based on U-Net using additive and feature pyramid
attention modules, which can improve the workflow of prostate cancer detection
and diagnosis. The proposed model is compared to seven different U-Net-based
architectures. The automatic segmentation performance of each model on the
central zone (CZ), peripheral zone (PZ), transition zone (TZ), and tumor was
evaluated using the Dice Score (DSC) and Intersection over Union (IoU)
metrics. The proposed alternative achieved a mean DSC of 84.15% and an IoU of
76.9% on the test set, outperforming most of the models studied in this work,
except for the R2U-Net and Attention R2U-Net architectures.
Related papers
- Y-CA-Net: A Convolutional Attention Based Network for Volumetric Medical Image Segmentation [47.12719953712902]
Discriminative local features are key components for the performance of attention-based volumetric segmentation (VS) methods.
We combine a convolutional encoder branch with a transformer backbone to extract local and global features in parallel.
Y-CT-Net achieves competitive performance on multiple medical segmentation tasks.
arXiv Detail & Related papers (2024-10-01T18:50:45Z)
- M3BUNet: Mobile Mean Max UNet for Pancreas Segmentation on CT-Scans [25.636974007788986]
We propose M3BUNet, a fusion of MobileNet and U-Net neural networks, equipped with a novel Mean-Max (MM) attention that operates in two stages to gradually segment pancreas CT images.
For the fine segmentation stage, we found that applying a wavelet decomposition filter to create multi-input images enhances pancreas segmentation performance.
Our approach demonstrates a considerable performance improvement, achieving an average Dice Similarity Coefficient (DSC) value of up to 89.53% and an Intersection Over Union (IOU) score of up to 81.16% for the NIH pancreas dataset.
arXiv Detail & Related papers (2024-01-18T23:10:08Z)
- OCU-Net: A Novel U-Net Architecture for Enhanced Oral Cancer Segmentation [22.652902408898733]
This study proposes OCU-Net, a pioneering U-Net image segmentation architecture exclusively designed to detect oral cancer.
OCU-Net incorporates advanced deep learning modules, such as the Channel and Spatial Attention Fusion (CSAF) module.
The incorporation of these modules yielded superior performance for oral cancer segmentation on the two datasets used in this research.
arXiv Detail & Related papers (2023-10-03T23:25:19Z)
- Assessing the performance of deep learning-based models for prostate cancer segmentation using uncertainty scores [1.0499611180329804]
The aim is to improve the workflow of prostate cancer detection and diagnosis.
The top-performing model is the Attention R2U-Net, achieving a mean Intersection over Union (IoU) of 76.3% and Dice Similarity Coefficient (DSC) of 85% for segmenting all zones.
arXiv Detail & Related papers (2023-08-09T01:38:58Z)
- Semantic segmentation of surgical hyperspectral images under geometric domain shifts [69.91792194237212]
We present the first analysis of state-of-the-art semantic segmentation networks in the presence of geometric out-of-distribution (OOD) data.
We also address generalizability with a dedicated augmentation technique termed "Organ Transplantation".
Our scheme improves on the state-of-the-art (SOA) DSC by up to 67% (RGB) and 90% (HSI) and renders performance on par with in-distribution performance on real OOD test data.
arXiv Detail & Related papers (2023-03-20T09:50:07Z)
- Comparative analysis of deep learning approaches for AgNOR-stained cytology samples interpretation [52.77024349608834]
This paper provides a way to analyze argyrophilic nucleolar organizer regions (AgNOR) stained slide using deep learning approaches.
Our results show that semantic segmentation using U-Net with a ResNet-18 or ResNet-34 backbone yields similar results.
The best model shows an IoU for nucleus, cluster, and satellites of 0.83, 0.92, and 0.99, respectively.
arXiv Detail & Related papers (2022-10-19T15:15:32Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm in distances between the prediction and ground truth for 44 landmarks, which is superior compared with other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method in kidney and renal tumor segmentation on abdominal pediatric CT scanners.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- KiU-Net: Towards Accurate Segmentation of Biomedical Images using Over-complete Representations [59.65174244047216]
We propose an over-complete architecture (Ki-Net) which involves projecting the data onto higher dimensions.
This network, when augmented with U-Net, results in significant improvements in the case of segmenting small anatomical landmarks.
We evaluate the proposed method on the task of brain anatomy segmentation from 2D Ultrasound of preterm neonates.
arXiv Detail & Related papers (2020-06-08T18:59:24Z)
- Convolutional Neural Networks based automated segmentation and labelling of the lumbar spine X-ray [0.0]
The aim of this study is to investigate the segmentation accuracies of different segmentation networks trained on 730 manually annotated lateral lumbar spine X-rays.
Instance segmentation networks were compared to semantic segmentation networks.
arXiv Detail & Related papers (2020-04-04T20:15:03Z)