Rethinking the Extraction and Interaction of Multi-Scale Features for
Vessel Segmentation
- URL: http://arxiv.org/abs/2010.04428v1
- Date: Fri, 9 Oct 2020 08:22:54 GMT
- Title: Rethinking the Extraction and Interaction of Multi-Scale Features for
Vessel Segmentation
- Authors: Yicheng Wu, Chengwei Pan, Shuqi Wang, Ming Zhang, Yong Xia, Yizhou Yu
- Abstract summary: We propose a novel deep learning model called PC-Net to segment retinal vessels and major arteries in 2D fundus images and 3D computed tomography angiography (CTA) scans, respectively.
In PC-Net, the pyramid squeeze-and-excitation (PSE) module introduces spatial information to each convolutional block, boosting its ability to extract more effective multi-scale features.
- Score: 53.187152856583396
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Analyzing the morphological attributes of blood vessels plays a critical role
in the computer-aided diagnosis of many cardiovascular and ophthalmologic
diseases. Although extensively studied, the segmentation of blood vessels,
particularly thin vessels and capillaries, remains challenging mainly due to
the lack of an effective interaction between local and global features. In this
paper, we propose a novel deep learning model called PC-Net to segment retinal
vessels and major arteries in 2D fundus images and 3D computed tomography
angiography (CTA) scans, respectively. In PC-Net, the pyramid
squeeze-and-excitation (PSE) module introduces spatial information to each
convolutional block, boosting its ability to extract more effective multi-scale
features, and the coarse-to-fine (CF) module replaces the conventional decoder
to enhance the details of thin vessels and process hard-to-classify pixels
again. We evaluated our PC-Net on the Digital Retinal Images for Vessel
Extraction (DRIVE) database and an in-house 3D major artery (3MA) database
against several recent methods. Our results not only demonstrate the
effectiveness of the proposed PSE module and CF module, but also suggest that
our PC-Net sets a new state of the art in the segmentation of retinal
vessels (AUC: 98.31%) on DRIVE and of major arteries (AUC: 98.35%) on 3MA.
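The paper itself does not include code. As a minimal, hypothetical sketch of how a pyramid squeeze-and-excitation block could feed pooled spatial context at several scales into a channel gate (the class name, pyramid scales, and reduction ratio below are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidSqueezeExcitation(nn.Module):
    """Hypothetical PSE-style block: channel re-weighting driven by descriptors
    pooled at several spatial scales (a sketch, not the authors' code)."""

    def __init__(self, channels, scales=(1, 2, 4), reduction=16):
        super().__init__()
        self.scales = scales
        in_features = channels * sum(s * s for s in scales)
        self.fc = nn.Sequential(
            nn.Linear(in_features, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c = x.shape[:2]
        # Spatial pyramid of pooled descriptors keeps coarse layout information.
        feats = [F.adaptive_avg_pool2d(x, s).flatten(1) for s in self.scales]
        weights = self.fc(torch.cat(feats, dim=1))   # (n, c) channel gates
        return x * weights.view(n, c, 1, 1)

# Toy usage: re-weight a feature map from any convolutional block.
y = PyramidSqueezeExcitation(64)(torch.randn(2, 64, 128, 128))
```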
Related papers
- KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation [51.03868117057726]
This paper proposes a novel Kalman filter based Linear Deformable Diffusion (KLDD) model for retinal vessel segmentation.
Our model employs a diffusion process that iteratively refines the segmentation, leveraging the flexible receptive fields of deformable convolutions.
Experiments are conducted on retinal fundus image datasets (DRIVE, CHASE_DB1) and on the 3mm and 6mm subsets of the OCTA-500 dataset.
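KLDD itself is not reproduced here; the snippet below only illustrates the deformable-convolution building block that the "flexible receptive fields" above refer to, using torchvision's DeformConv2d. The offset predictor and its zero initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Sketch of a deformable convolution with a learned offset field
    (illustrative only, not KLDD's architecture)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Offsets: 2 values (dy, dx) per kernel position, predicted per pixel.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        nn.init.zeros_(self.offset_conv.weight)   # start as a regular convolution
        nn.init.zeros_(self.offset_conv.bias)

    def forward(self, x):
        offsets = self.offset_conv(x)       # (N, 2*k*k, H, W)
        return self.deform_conv(x, offsets) # sampling grid follows the offsets

y = DeformableBlock(16, 32)(torch.randn(1, 16, 64, 64))   # -> (1, 32, 64, 64)
```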
arXiv Detail & Related papers (2024-09-19T14:21:38Z)
- Deep Learning for Vascular Segmentation and Applications in Phase Contrast Tomography Imaging [33.23991248643144]
We present a thorough literature review, highlighting the state of machine learning techniques across diverse organs.
Our goal is to provide a foundation on the topic and identify a robust baseline model for application to vascular segmentation in a new imaging modality.
HiP-CT enables 3D imaging of complete organs at an unprecedented resolution of ca. 20 µm per voxel.
arXiv Detail & Related papers (2023-11-22T11:15:38Z)
- Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations [12.571349114534597]
We present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis.
We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets.
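The paper's simulator is not described in detail above; as a generic illustration of space-colonization growth of the kind such a vascular simulation could build on (the 2D setting, parameters, and names are assumptions, not the authors' code):

```python
import numpy as np

def space_colonization(attractors, root, step=0.02, influence=0.15, kill=0.03, iters=200):
    """Generic 2D space-colonization growth (a sketch, not the paper's simulator).
    attractors: (M, 2) points the network grows toward; root: (2,) seed node."""
    node_list = [np.asarray(root, dtype=float)]
    parents = [-1]                                   # tree topology: parent index per node
    attractors = np.asarray(attractors, dtype=float)
    for _ in range(iters):
        if len(attractors) == 0:
            break
        nodes = np.stack(node_list)                  # (K, 2)
        d = np.linalg.norm(attractors[:, None] - nodes[None], axis=-1)  # (M, K)
        nearest = d.argmin(axis=1)
        grow = {}                                    # node index -> summed growth direction
        for a_idx, n_idx in enumerate(nearest):
            if d[a_idx, n_idx] < influence:
                v = attractors[a_idx] - nodes[n_idx]
                grow.setdefault(n_idx, np.zeros(2))
                grow[n_idx] = grow[n_idx] + v / (np.linalg.norm(v) + 1e-9)
        if not grow:
            break
        for n_idx, direction in grow.items():
            direction = direction / (np.linalg.norm(direction) + 1e-9)
            node_list.append(nodes[n_idx] + step * direction)
            parents.append(n_idx)
        # Drop attraction points that have been reached.
        nodes = np.stack(node_list)
        dmin = np.linalg.norm(attractors[:, None] - nodes[None], axis=-1).min(axis=1)
        attractors = attractors[dmin > kill]
    return np.stack(node_list), parents

# Toy usage: grow a tree toward 400 random attraction points from the image centre.
nodes, parents = space_colonization(np.random.rand(400, 2), root=(0.5, 0.5))
```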
arXiv Detail & Related papers (2023-06-19T14:01:47Z)
- Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
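AFN's exact formulation is not given above; the sketch below only shows one plausible way to compute multiscale pixel affinities from a soft segmentation map, where affinity is high when a pixel and a shifted neighbour agree. The offsets and the agreement score are assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_affinity(prob, dilations=(1, 2, 4)):
    """Soft affinity between each pixel and its 4-neighbours at several dilations.
    prob: (N, 1, H, W) foreground probability map.
    Returns (N, 4 * len(dilations), H, W); each channel is near 1 where a pixel
    and its shifted neighbour agree (both vessel or both background)."""
    h, w = prob.shape[-2:]
    maps = []
    for d in dilations:
        padded = F.pad(prob, (d, d, d, d), mode="replicate")
        for dy, dx in ((-d, 0), (d, 0), (0, -d), (0, d)):
            neighbour = padded[..., d + dy:d + dy + h, d + dx:d + dx + w]
            maps.append(prob * neighbour + (1 - prob) * (1 - neighbour))
    return torch.cat(maps, dim=1)

aff = multiscale_affinity(torch.rand(2, 1, 64, 64))   # -> (2, 12, 64, 64)
```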
arXiv Detail & Related papers (2022-11-12T05:39:17Z)
- Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution [56.622832383316215]
We propose a novel multi-head convolutional attention module to super-resolve CT and MRI scans.
Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple input tensors.
We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention.
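As a rough illustration of the described design (not the authors' module), the sketch below combines spatial attention heads with different kernel sizes and a simple channel gate; the kernel sizes and reduction rate are assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadConvAttention(nn.Module):
    """Sketch: each head computes a spatial attention map with a different kernel
    size (receptive field); a shared channel gate follows. Illustrative only."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7), reduction=8):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=k, padding=k // 2) for k in kernel_sizes]
        )
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Average the per-head spatial maps, then gate spatially and channel-wise.
        spatial = torch.stack([torch.sigmoid(h(x)) for h in self.heads]).mean(0)
        return x * spatial * self.channel_gate(x)

y = MultiHeadConvAttention(32)(torch.randn(1, 32, 48, 48))
```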
arXiv Detail & Related papers (2022-04-08T07:56:55Z)
- Pulmonary Vessel Segmentation based on Orthogonal Fused U-Net++ of Chest CT Images [1.8692254863855962]
We present an effective framework and refinement process of pulmonary vessel segmentation from chest computed tomographic (CT) images.
The key to our approach is a 2.5D segmentation network applied from three axes, which presents a robust and fully automated pulmonary vessel segmentation result.
Our method outperforms other network structures by a large margin and achieves by far the highest average Dice score of 0.9272 and precision of 0.9310.
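The orthogonal fusion used by the paper is not specified above; the snippet only sketches the general 2.5D pattern of running a 2D network slice-by-slice along each of the three axes and averaging the resulting probability volumes. The model_2d callable is a placeholder.

```python
import torch

@torch.no_grad()
def predict_three_axes(volume, model_2d):
    """Run a 2D segmentation net along the axial, coronal and sagittal axes of a
    3D volume (D, H, W) and average the three probability volumes (a sketch)."""
    probs = torch.zeros_like(volume)
    for axis in (0, 1, 2):
        vol = volume.movedim(axis, 0)                      # slices along `axis` first
        slices = vol.unsqueeze(1)                          # (S, 1, h, w) batch of slices
        pred = torch.sigmoid(model_2d(slices)).squeeze(1)  # (S, h, w)
        probs += pred.movedim(0, axis)                     # back to (D, H, W)
    return probs / 3.0

# Toy usage with a trivial stand-in for the 2D network.
model_2d = torch.nn.Conv2d(1, 1, 3, padding=1)
mask = predict_three_axes(torch.randn(32, 64, 48), model_2d) > 0.5
```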
arXiv Detail & Related papers (2021-07-03T21:46:29Z)
- Contextual Information Enhanced Convolutional Neural Networks for Retinal Vessel Segmentation in Color Fundus Images [0.0]
An automatic retinal vessel segmentation system can effectively facilitate clinical diagnosis and ophthalmological research.
A deep learning based method is proposed that integrates several customized modules into the well-known encoder-decoder U-Net architecture.
As a result, the proposed method outperforms prior work and achieves state-of-the-art performance in Sensitivity/Recall, F1-score and MCC.
arXiv Detail & Related papers (2021-03-25T06:10:47Z)
- Multi-Task Neural Networks with Spatial Activation for Retinal Vessel Segmentation and Artery/Vein Classification [49.64863177155927]
We propose a multi-task deep neural network with spatial activation mechanism to segment full retinal vessel, artery and vein simultaneously.
The proposed network achieves pixel-wise accuracy of 95.70% for vessel segmentation, and A/V classification accuracy of 94.50%, which is the state-of-the-art performance for both tasks.
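No implementation details are given above; below is a minimal multi-task head sketch with a vessel branch and an artery/vein branch over shared features. The vessel-probability gating used as a stand-in for the spatial activation mechanism is an assumption.

```python
import torch
import torch.nn as nn

class MultiTaskVesselHead(nn.Module):
    """Sketch: shared features feed a vessel-segmentation branch and an
    artery/vein classification branch (background/artery/vein). Illustrative only."""

    def __init__(self, feat_ch=64):
        super().__init__()
        self.vessel_head = nn.Conv2d(feat_ch, 1, kernel_size=1)  # binary vessel map
        self.av_head = nn.Conv2d(feat_ch, 3, kernel_size=1)      # background / artery / vein

    def forward(self, feats):
        vessel_logits = self.vessel_head(feats)
        # Gate A/V logits with the vessel probability (an assumption, not the
        # paper's spatial-activation mechanism).
        av_logits = self.av_head(feats) * torch.sigmoid(vessel_logits)
        return vessel_logits, av_logits

feats = torch.randn(1, 64, 128, 128)              # from any shared encoder-decoder
vessel_logits, av_logits = MultiTaskVesselHead(64)(feats)
```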
arXiv Detail & Related papers (2020-07-18T05:46:47Z)
- ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model [41.444917622855606]
We release a dedicated OCT-A SEgmentation dataset (ROSE), which consists of 229 OCT-A images with vessel annotations at either centerline-level or pixel level.
Secondly, we propose a novel Split-based Coarse-to-Fine vessel segmentation network (SCF-Net), with the ability to detect thick and thin vessels separately.
In the SCF-Net, a split-based coarse segmentation (SCS) module is first introduced to produce a preliminary confidence map of vessels, and a split-based refinement (SRN) module is then used to optimize the shape/contour of the detected vessels.
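The SCS and SRN modules are not specified above; the sketch below only captures the generic coarse-to-fine pattern the entry describes, with placeholder sub-networks: a coarse pass produces a confidence map, and a refinement pass re-processes the image together with that map.

```python
import torch
import torch.nn as nn

class CoarseToFine(nn.Module):
    """Generic coarse-to-fine segmentation wrapper (a sketch, not SCF-Net)."""

    def __init__(self, coarse_net, refine_net):
        super().__init__()
        self.coarse_net = coarse_net    # image -> 1-channel coarse logits
        self.refine_net = refine_net    # image + coarse map -> refined logits

    def forward(self, image):
        coarse = torch.sigmoid(self.coarse_net(image))             # preliminary confidence map
        refined = self.refine_net(torch.cat([image, coarse], 1))   # second pass on hard pixels
        return coarse, refined

# Toy usage with trivial stand-in networks.
coarse_net = nn.Conv2d(3, 1, 3, padding=1)
refine_net = nn.Conv2d(4, 1, 3, padding=1)
coarse, refined = CoarseToFine(coarse_net, refine_net)(torch.randn(1, 3, 64, 64))
```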
arXiv Detail & Related papers (2020-07-10T06:54:19Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower birth weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in the original image.
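As a simple illustration of that final step, the snippet below concatenates a predicted demarcation-line mask to the RGB image as a fourth channel before it is passed to a classifier; the classifier stem shown is a placeholder.

```python
import torch
import torch.nn as nn

def add_mask_channel(image_rgb, mask):
    """Stack a single-channel segmentation mask onto an RGB image as an extra
    'color' channel: (N, 3, H, W) + (N, 1, H, W) -> (N, 4, H, W)."""
    return torch.cat([image_rgb, mask], dim=1)

# The downstream classifier then simply accepts 4 input channels (illustrative).
classifier_stem = nn.Conv2d(4, 32, kernel_size=3, padding=1)
x = add_mask_channel(torch.rand(2, 3, 224, 224), torch.rand(2, 1, 224, 224))
features = classifier_stem(x)
```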
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.