RA V-Net: Deep learning network for automated liver segmentation
- URL: http://arxiv.org/abs/2112.08232v2
- Date: Thu, 16 Dec 2021 03:30:04 GMT
- Title: RA V-Net: Deep learning network for automated liver segmentation
- Authors: Zhiqi Lee, Sumin Qi, Chongchong Fan, Ziwei Xie
- Abstract summary: RA V-Net is an improved automatic medical image segmentation model based on U-Net.
With more complex convolution layers and skip connections, it obtains a higher level of image feature extraction capability.
The most representative metric for segmentation quality is DSC, which improves by 0.1107 over U-Net.
- Score: 1.6795461001108098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of the liver is a prerequisite for the diagnosis of liver disease, and automated segmentation is an important application of computer-aided detection and diagnosis. In recent years, automated processing of medical images has achieved breakthroughs. However, the low contrast of abdominal CT scans and the complexity of liver morphology make accurate automatic segmentation challenging. In this paper, we propose RA V-Net, an improved automatic medical image segmentation model based on U-Net with three main innovations. The CofRes Module (Composite Original Feature Residual Module) uses more complex convolution layers and skip connections to obtain a higher level of image feature extraction capability and to prevent vanishing or exploding gradients. The AR Module (Attention Recovery Module) reduces the computational effort of the model; in addition, the spatial relationships between pixels of the encoding and decoding features are captured by adjusting the channels and applying LSTM convolution, so that image features are effectively retained. The CA Module (Channel Attention Module) extracts relevant channels with dependencies and strengthens them by matrix dot product, while weakening irrelevant channels without dependencies, thereby realizing channel attention. The attention mechanisms provided by the LSTM convolution and the CA Module are strong guarantees of the network's performance. The evaluation metrics of the U-Net baseline are accuracy 0.9862, precision 0.9118, DSC 0.8547, and JSC 0.82; those of RA V-Net are accuracy 0.9968, precision 0.9597, DSC 0.9654, and JSC 0.9414. The most representative metric for segmentation quality is DSC, which improves by 0.1107 over U-Net, while JSC improves by 0.1214.
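The abstract describes the CA Module only at a high level: score the channels, strengthen the relevant ones by a per-channel product, and suppress the irrelevant ones. The sketch below is a minimal, generic squeeze-and-excitation-style channel attention block in PyTorch that illustrates this general idea; the class name, reduction ratio, and layer choices are assumptions for illustration and do not reproduce the paper's actual CA Module.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Minimal channel-attention sketch (squeeze-and-excitation style).

    Illustrative only: scores each channel, then reweights the feature map
    with a per-channel product so that useful channels are strengthened and
    irrelevant ones are suppressed.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)          # dummy encoder feature map
    print(ChannelAttention(64)(feat).shape)    # torch.Size([2, 64, 32, 32])
```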
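The DSC and JSC values quoted above follow the standard overlap definitions for binary masks. The NumPy sketch below shows those definitions; the function name and epsilon smoothing are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np


def dice_and_jaccard(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """DSC = 2|A∩B| / (|A| + |B|), JSC = |A∩B| / |A∪B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dsc = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    jsc = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return float(dsc), float(jsc)
```

Both metrics equal 1.0 for a perfect prediction; the reported DSC of 0.9654 versus 0.8547 corresponds to the 0.1107 improvement quoted above.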
Related papers
- Multi-Layer Feature Fusion with Cross-Channel Attention-Based U-Net for Kidney Tumor Segmentation [0.0]
U-Net based deep learning techniques are emerging as a promising approach for automated medical image segmentation.
We present an improved U-Net based model for end-to-end automated semantic segmentation of CT scan images to identify renal tumors.
arXiv Detail & Related papers (2024-10-20T19:02:41Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- Channel Attention Separable Convolution Network for Skin Lesion Segmentation [2.8636163472272576]
We propose a novel network called Channel Attention Separable Convolution Network (CASCN) for skin lesion segmentation.
CASCN achieves state-of-the-art performance on the PH2 dataset with a Dice similarity coefficient of 0.9461 and an accuracy of 0.9645.
arXiv Detail & Related papers (2023-09-03T04:20:28Z)
- Deep Learning Framework with Multi-Head Dilated Encoders for Enhanced Segmentation of Cervical Cancer on Multiparametric Magnetic Resonance Imaging [0.6597195879147557]
T2-weighted magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) are essential components for cervical cancer diagnosis.
We propose a novel multi-head framework that uses dilated convolutions and shared residual connections for separate encoding of multiparametric MRI images.
arXiv Detail & Related papers (2023-06-19T19:41:21Z)
- DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler Signal [48.97719097435527]
DopUS-Net combines the Doppler images with B-mode images to increase the segmentation accuracy and robustness of small blood vessels.
An artery re-identification module qualitatively evaluates the real-time segmentation results and automatically optimizes the probe pose for enhanced Doppler images.
arXiv Detail & Related papers (2023-05-15T18:19:29Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation [3.57686754209902]
Quantification of retinal fluids is necessary for OCT-guided treatment management.
A new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation.
The model benefits from hierarchical representation learning of textural, contextual, and edge features.
arXiv Detail & Related papers (2022-09-26T07:18:00Z)
- Decoupled Pyramid Correlation Network for Liver Tumor Segmentation from CT images [22.128902125820193]
We propose a Decoupled Pyramid Correlation Network (DPC-Net).
It exploits attention mechanisms to fully leverage both low- and high-level features embedded in the FCN to segment liver tumors.
It achieves competitive results with a DSC of 96.2% and an ASSD of 1.636 mm for liver segmentation.
arXiv Detail & Related papers (2022-05-26T07:31:29Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method in kidney and renal tumor segmentation on abdominal pediatric CT scanners.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Collaborative Boundary-aware Context Encoding Networks for Error Map Prediction [65.44752447868626]
We propose collaborative boundary-aware context encoding networks, called AEP-Net, for the error map prediction task.
Specifically, we propose a collaborative feature transformation branch for better feature fusion between images and masks, and precise localization of error regions.
AEP-Net achieves average DSCs of 0.8358 and 0.8164 for the error prediction task, and shows a high Pearson correlation coefficient of 0.9873.
arXiv Detail & Related papers (2020-06-25T12:42:01Z)