(M)SLAe-Net: Multi-Scale Multi-Level Attention embedded Network for
Retinal Vessel Segmentation
- URL: http://arxiv.org/abs/2109.02084v1
- Date: Sun, 5 Sep 2021 14:29:00 GMT
- Title: (M)SLAe-Net: Multi-Scale Multi-Level Attention embedded Network for
Retinal Vessel Segmentation
- Authors: Shreshth Saini, Geetika Agrawal
- Abstract summary: We propose a multi-scale, multi-level attention embedded CNN architecture ((M)SLAe-Net) to address the issue of multi-stage processing.
We do this by extracting features at multiple scales and multiple levels of the network, enabling our model to holistically extract local and global features.
Our unique network design and novel D-DPP module, combined with an efficient task-specific loss function for thin vessels, enable better cross-data performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation plays a crucial role in diagnosis. Studying the retinal
vasculature in fundus images helps identify early signs of many crucial
illnesses such as diabetic retinopathy. Due to the varying shape, size, and
patterns of retinal vessels, along with artefacts and noise in fundus images,
no one-stage method can accurately segment retinal vessels. In this work, we
propose a multi-scale, multi-level attention embedded CNN architecture
((M)SLAe-Net) to address the issue of multi-stage processing for robust and
precise segmentation of retinal vessels. We do this by extracting features at
multiple scales and multiple levels of the network, enabling our model to
holistically extract local and global features. Multi-scale features are
extracted using our novel dynamic dilated pyramid pooling (D-DPP) module. We
also aggregate the features from all the network levels. Together, these
effectively resolve the issues of varying shapes and artefacts, and hence the
need for multiple stages. To assist in better pixel-level classification, we
use the Squeeze-and-Attention (SA) module, an adaptation of the
Squeeze-and-Excitation (SE) module to segmentation tasks, in our network to
facilitate pixel-group attention. Our unique network design and novel D-DPP
module, combined with an efficient task-specific loss function for thin
vessels, enable better cross-data performance. Exhaustive experimental results
on DRIVE, STARE, HRF, and CHASE-DB1 show the superiority of our method.
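The abstract names two core building blocks, a dynamic dilated pyramid pooling (D-DPP) module for multi-scale feature extraction and a Squeeze-and-Attention (SA) module for pixel-group attention, without detailing their internals. Below is a minimal PyTorch sketch of how such blocks are typically realized; the class names, dilation rates, channel sizes, and the static (rather than dynamic) pyramid are illustrative assumptions, not the authors' exact (M)SLAe-Net configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedPyramidPooling(nn.Module):
    """Multi-scale feature extraction via parallel dilated convolutions.

    The dilation rates (1, 2, 4, 8) are assumed; the paper's D-DPP adapts
    its pyramid dynamically, which is not reproduced here.
    """

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated multi-scale branches back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))


class SqueezeAndAttention(nn.Module):
    """SE-style gate kept spatial for segmentation (pixel-group attention).

    Unlike Squeeze-and-Excitation, the attention map retains a (downsampled)
    spatial extent and is added back, so groups of pixels are re-weighted
    rather than whole channels alone.
    """

    def __init__(self, in_ch: int, out_ch: int, reduction: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(
            nn.AvgPool2d(kernel_size=2),                 # spatial "squeeze"
            nn.Conv2d(in_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x)
        attn = F.interpolate(self.attn(x), size=feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        return feat * attn + attn                        # attended features


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)           # dummy encoder feature map
    y = SqueezeAndAttention(32, 64)(DilatedPyramidPooling(32, 32)(x))
    print(y.shape)                            # torch.Size([1, 64, 64, 64])
```

In the actual network, the D-DPP output would feed several decoder depths and the SA block would gate each level's features before aggregation; the sketch only wires the two blocks in sequence to show their interfaces.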
Related papers
- Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention [1.1155836879100416]
We propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation.
The MFMSA block refines spatial feature extraction, particularly the capture of boundary features.
The E-SDM mitigates information loss in multi-task learning with deep supervision.
arXiv Detail & Related papers (2024-05-10T07:34:36Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) for diverse medical image segmentation tasks.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z)
- RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation [3.57686754209902]
Quantification of retinal fluids is necessary for OCT-guided treatment management.
A new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation.
The model benefits from hierarchical representation learning of textural, contextual, and edge features.
arXiv Detail & Related papers (2022-09-26T07:18:00Z)
- Multi-level Second-order Few-shot Learning [111.0648869396828]
We propose a Multi-level Second-order (MlSo) few-shot learning network for supervised or unsupervised few-shot image classification and few-shot action recognition.
We leverage so-called power-normalized second-order base learner streams combined with features that express multiple levels of visual abstraction.
We demonstrate respectable results on standard datasets such as Omniglot, mini-ImageNet, tiered-ImageNet, Open MIC, fine-grained datasets such as CUB Birds, Stanford Dogs and Cars, and action recognition datasets such as HMDB51, UCF101, and mini-MIT.
arXiv Detail & Related papers (2022-01-15T19:49:00Z)
- w-Net: Dual Supervised Medical Image Segmentation Model with Multi-Dimensional Attention and Cascade Multi-Scale Convolution [47.56835064059436]
A multi-dimensional attention segmentation model with cascade multi-scale convolution is proposed to accurately segment small objects in medical images.
The proposed method is evaluated on three datasets: KiTS19, Pancreas CT of Decathlon-10, and MICCAI 2018 LiTS Challenge.
arXiv Detail & Related papers (2020-11-15T13:54:22Z)
- Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention and Dynamic Resampling [13.542898009730804]
The performance of relevant algorithms is significantly affected by the proper fusion of the multi-modal information.
We present the Max-Fusion U-Net that achieves improved pathology segmentation performance.
We evaluate our methods on the Myocardial pathology segmentation (MyoPS) task, which combines the multi-sequence CMR dataset.
arXiv Detail & Related papers (2020-09-05T17:24:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.