Residual Spatial Attention Network for Retinal Vessel Segmentation
- URL: http://arxiv.org/abs/2009.08829v1
- Date: Fri, 18 Sep 2020 13:17:13 GMT
- Title: Residual Spatial Attention Network for Retinal Vessel Segmentation
- Authors: Changlu Guo, Márton Szemenyei, Yugen Yi, Wei Zhou, Haodong Bian
- Abstract summary: We propose the Residual Spatial Attention Network (RSAN) for retinal vessel segmentation.
RSAN employs a modified residual block structure that integrates DropBlock.
To further improve the representation capability of the network, we introduce spatial attention (SA) and propose the Residual Spatial Attention Block (RSAB).
- Score: 6.513112974264861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable segmentation of retinal vessels can be used to monitor and
diagnose certain diseases, such as diabetes and hypertension, because these
diseases affect the retinal vascular structure. In this work, we propose the
Residual Spatial Attention Network (RSAN) for retinal vessel segmentation. RSAN
employs a modified residual block structure that integrates DropBlock, which
not only allows deeper networks to be built for extracting more complex
vascular features, but also effectively alleviates overfitting. Moreover, to
further improve the representation capability of the network, we introduce
spatial attention (SA) into this modified residual block and propose the
Residual Spatial Attention Block (RSAB), from which RSAN is built. We adopt the
public DRIVE and CHASE_DB1 color fundus image datasets to evaluate the proposed
RSAN. Experiments show that the modified residual structure and the spatial
attention are effective, and the proposed RSAN achieves state-of-the-art
performance.
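To make the building blocks described above concrete, here is a minimal PyTorch sketch of a residual block that combines DropBlock regularization with CBAM-style spatial attention (the same kind of module the SA-UNet entry below describes). The layer ordering, drop probability, block size, and exact placement of the attention gate are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DropBlock2D(nn.Module):
    """Minimal DropBlock: drops contiguous block_size x block_size regions
    of the feature map during training (simplified seed probability)."""

    def __init__(self, drop_prob=0.15, block_size=7):
        super().__init__()
        self.drop_prob = drop_prob
        self.block_size = block_size

    def forward(self, x):
        if not self.training or self.drop_prob == 0.0:
            return x
        # Seed probability so the expected dropped fraction is roughly drop_prob.
        gamma = self.drop_prob / (self.block_size ** 2)
        seeds = (torch.rand_like(x) < gamma).float()
        # Expand each seed to a block_size x block_size square.
        mask = F.max_pool2d(seeds, kernel_size=self.block_size,
                            stride=1, padding=self.block_size // 2)
        keep = 1.0 - mask
        # Rescale so the expected activation magnitude stays constant.
        return x * keep * keep.numel() / keep.sum().clamp(min=1.0)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise mean/max maps -> 7x7 conv
    -> sigmoid gate multiplied onto the input features."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate


class RSAB(nn.Module):
    """Residual block with DropBlock and spatial attention, following the
    abstract's description; the internal layer order is an assumption."""

    def __init__(self, channels, drop_prob=0.15, block_size=7):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            DropBlock2D(drop_prob, block_size),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            DropBlock2D(drop_prob, block_size),
            nn.BatchNorm2d(channels),
        )
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.body(x))
        return F.relu(out + x)  # residual connection


if __name__ == "__main__":
    block = RSAB(channels=32)
    features = torch.randn(1, 32, 64, 64)  # e.g. a fundus-image feature map
    print(block(features).shape)  # torch.Size([1, 32, 64, 64])
```

Stacking several such blocks in an encoder-decoder backbone is one plausible way to assemble the full RSAN, but the actual network depth and topology are given in the paper, not in this sketch.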
Related papers
- TPOT: Topology Preserving Optimal Transport in Retinal Fundus Image Enhancement [16.84367978693017]
We propose a training paradigm that regularizes blood vessel structures by minimizing the difference between persistence diagrams.
We call the resulting framework Topology Preserving Optimal Transport (TPOT).
Experimental results on a large-scale dataset demonstrate the superiority of the proposed method compared to several state-of-the-art supervised and unsupervised techniques.
arXiv Detail & Related papers (2024-11-03T02:04:35Z)
- MDFI-Net: Multiscale Differential Feature Interaction Network for Accurate Retinal Vessel Segmentation [3.152646316470194]
This paper proposes a feature-enhanced interaction network based on DPCN, named MDFI-Net.
The proposed MDFI-Net achieves segmentation performance superior to state-of-the-art methods on public datasets.
arXiv Detail & Related papers (2024-10-20T16:42:22Z)
- KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation [51.03868117057726]
This paper proposes a novel Kalman filter based Linear Deformable Diffusion (KLDD) model for retinal vessel segmentation.
Our model employs a diffusion process that iteratively refines the segmentation, leveraging the flexible receptive fields of deformable convolutions.
Experiments are conducted on retinal fundus image datasets (DRIVE, CHASE_DB1) and on the 3mm and 6mm subsets of the OCTA-500 dataset.
arXiv Detail & Related papers (2024-09-19T14:21:38Z)
- TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising [94.09442506816724]
Blind-spot networks (BSN) are prevalent network architectures in self-supervised image denoising (SSID).
We present a transformer-based blind-spot network (TBSN) by analyzing and redesigning the transformer operators that meet the blind-spot requirement.
For spatial self-attention, an elaborate mask is applied to the attention matrix to restrict its receptive field, thus mimicking dilated convolution; a toy sketch of such masked attention appears after this list.
For channel self-attention, we observe that it may leak blind-spot information when the channel number is greater than the spatial size in the deep layers of multi-scale architectures.
arXiv Detail & Related papers (2024-04-11T15:39:10Z)
- RSF-Conv: Rotation-and-Scale Equivariant Fourier Parameterized Convolution for Retinal Vessel Segmentation [58.618797429661754]
We propose a rotation-and-scale equivariant Fourier parameterized convolution (RSF-Conv) specifically for retinal vessel segmentation.
As a general module, RSF-Conv can be integrated into existing networks in a plug-and-play manner.
To demonstrate the effectiveness of RSF-Conv, we also apply RSF-Conv+U-Net and RSF-Conv+Iter-Net to retinal artery/vein classification.
arXiv Detail & Related papers (2023-09-27T13:14:57Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Claw U-Net: A Unet-based Network with Deep Feature Concatenation for Scleral Blood Vessel Segmentation [18.10578418379116]
Sturge-Weber syndrome (SWS) is a vascular malformation disease, and it may cause blindness if the patient's condition is severe.
How to accurately segment scleral blood vessels has become a significant problem in computer-aided diagnosis.
Claw U-Net outperforms other UNet-based networks on the scleral blood vessel image dataset.
arXiv Detail & Related papers (2020-10-20T09:55:29Z)
- Rethinking the Extraction and Interaction of Multi-Scale Features for Vessel Segmentation [53.187152856583396]
We propose a novel deep learning model called PC-Net to segment retinal vessels and major arteries in 2D fundus images and 3D computed tomography angiography (CTA) scans.
In PC-Net, the pyramid squeeze-and-excitation (PSE) module introduces spatial information to each convolutional block, boosting its ability to extract more effective multi-scale features.
arXiv Detail & Related papers (2020-10-09T08:22:54Z)
- Multi-Task Neural Networks with Spatial Activation for Retinal Vessel Segmentation and Artery/Vein Classification [49.64863177155927]
We propose a multi-task deep neural network with a spatial activation mechanism to segment the full retinal vessels, arteries, and veins simultaneously.
The proposed network achieves pixel-wise accuracy of 95.70% for vessel segmentation, and A/V classification accuracy of 94.50%, which is the state-of-the-art performance for both tasks.
arXiv Detail & Related papers (2020-07-18T05:46:47Z)
- Dense Residual Network for Retinal Vessel Segmentation [8.778525346264466]
We propose an efficient method to segment blood vessels in Scanning Laser Ophthalmoscopy retinal images.
Inspired by U-Net, "feature map reuse" and residual learning, we propose a deep dense residual network structure called DRNet.
Our method achieves state-of-the-art performance even without data augmentation.
arXiv Detail & Related papers (2020-04-07T20:42:13Z)
- SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation [4.6859605614050155]
We propose a lightweight network named Spatial Attention U-Net (SA-UNet) that does not require thousands of annotated training samples.
SA-UNet introduces a spatial attention module which infers the attention map along the spatial dimension, and multiplies the attention map by the input feature map for adaptive feature refinement.
We evaluate SA-UNet on two benchmark retinal datasets: the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Child Heart and Health Study in England (CHASE_DB1) dataset.
arXiv Detail & Related papers (2020-04-07T20:41:12Z)
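As referenced in the TBSN entry above, the following is a toy Python sketch of spatial self-attention with a binary mask applied to the attention matrix, so that each position only attends to an allowed neighbourhood. The single-head setup, shapes, and alternating-column mask are illustrative assumptions and do not reproduce TBSN's actual dilated, blind-spot-respecting mask design.

```python
import torch


def masked_spatial_attention(q, k, v, allowed):
    """Single-head self-attention where a boolean mask restricts which
    positions each query may attend to.

    q, k, v: (N, D) token features; allowed: (N, N) boolean mask with
    allowed[i, j] = True if position i may attend to position j.
    """
    scores = q @ k.t() / (q.shape[-1] ** 0.5)               # (N, N) raw attention
    scores = scores.masked_fill(~allowed, float("-inf"))    # block forbidden pairs
    weights = torch.softmax(scores, dim=-1)                 # rows normalized over allowed slots
    return weights @ v


# Toy usage: 16 tokens, each allowed to attend only to every other token
# (a crude stand-in for a dilated attention mask).
n, d = 16, 8
q = k = v = torch.randn(n, d)
allowed = torch.zeros(n, n, dtype=torch.bool)
allowed[:, ::2] = True
out = masked_spatial_attention(q, k, v, allowed)
print(out.shape)  # torch.Size([16, 8])
```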
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.