Retinal Vessel Segmentation with Pixel-wise Adaptive Filters
- URL: http://arxiv.org/abs/2202.01782v1
- Date: Thu, 3 Feb 2022 14:40:36 GMT
- Title: Retinal Vessel Segmentation with Pixel-wise Adaptive Filters
- Authors: Mingxing Li, Shenglong Zhou, Chang Chen, Yueyi Zhang, Dong Liu, Zhiwei Xiong
- Abstract summary: We propose two novel methods to address the challenges of retinal vessel segmentation.
First, we devise a light-weight module, named multi-scale residual similarity gathering (MRSG), to generate pixel-wise adaptive filters (PA-Filters).
Second, we introduce a response cue erasing (RCE) strategy to enhance the segmentation accuracy.
- Score: 47.8629995041574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate retinal vessel segmentation is challenging because of the complex
texture of retinal vessels and low imaging contrast. Previous methods generally
refine segmentation results by cascading multiple deep networks, which are
time-consuming and inefficient. In this paper, we propose two novel methods to
address these challenges. First, we devise a light-weight module, named
multi-scale residual similarity gathering (MRSG), to generate pixel-wise
adaptive filters (PA-Filters). Different from cascading multiple deep networks,
only one PA-Filter layer can improve the segmentation results. Second, we
introduce a response cue erasing (RCE) strategy to enhance the segmentation
accuracy. Experimental results on the DRIVE, CHASE_DB1, and STARE datasets
demonstrate that our proposed method outperforms state-of-the-art methods while
maintaining a compact structure. Code is available at
https://github.com/Limingxing00/Retinal-Vessel-Segmentation-ISBI20222.
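To make the PA-Filter idea concrete, here is a minimal PyTorch sketch (not the authors' released code; the module name, tensor shapes, and softmax normalization are assumptions): a small head stands in for the MRSG module and predicts a KxK kernel for every pixel, which is then applied to that pixel's neighborhood of a coarse segmentation map. The RCE strategy is not sketched here.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelwiseAdaptiveFilter(nn.Module):
    """Illustrative pixel-wise adaptive filtering (not the official PA-Filter code).

    A 1x1 convolution predicts a K*K kernel for every pixel; each pixel's
    kernel is then applied to its own KxK neighborhood of the coarse
    segmentation map to produce a refined map.
    """

    def __init__(self, in_channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Hypothetical filter-generating head standing in for the MRSG module.
        self.filter_head = nn.Conv2d(in_channels, kernel_size * kernel_size, 1)

    def forward(self, feats: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # feats:  (B, C, H, W) encoder features
        # coarse: (B, 1, H, W) coarse segmentation logits to refine
        b, _, h, w = coarse.shape
        filters = F.softmax(self.filter_head(feats), dim=1)      # (B, K*K, H, W)
        patches = F.unfold(coarse, self.k, padding=self.k // 2)  # (B, K*K, H*W)
        patches = patches.view(b, self.k * self.k, h, w)
        return (filters * patches).sum(dim=1, keepdim=True)      # (B, 1, H, W)


# Toy usage: refine a coarse map with one PA-Filter-style layer.
pa = PixelwiseAdaptiveFilter(in_channels=64)
refined = pa(torch.randn(2, 64, 48, 48), torch.randn(2, 1, 48, 48))
print(refined.shape)  # torch.Size([2, 1, 48, 48])
```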
Related papers
- SGLP: A Similarity Guided Fast Layer Partition Pruning for Compressing Large Deep Models [19.479746878680707]
Layer pruning is a potent approach to reduce network size and improve computational efficiency.
We propose a Similarity Guided fast Layer Partition pruning for compressing large deep models.
Our method outperforms the state-of-the-art methods in both accuracy and computational efficiency.
arXiv Detail & Related papers (2024-10-14T04:01:08Z)
- CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters [0.0]
Image dehazing aims to restore image clarity and visual quality by reducing atmospheric scattering and absorption effects.
Inspired by dynamic filtering, we propose using cascaded dynamic filters to create a multi-branch network.
Experiments on RESIDE, Haze4K, and O-Haze datasets validate our method's effectiveness.
arXiv Detail & Related papers (2024-09-13T03:20:38Z)
- Image-level Regression for Uncertainty-aware Retinal Image Segmentation [3.7141182051230914]
We introduce a novel Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
arXiv Detail & Related papers (2024-05-27T04:17:10Z)
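As a rough illustration of adding pixel uncertainty to a ground-truth mask (this is not the SAUNA transform itself, only a hedged sketch of the general idea using a signed-distance ramp; the function name and margin are assumptions):
```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_uncertainty_target(mask: np.ndarray, margin: float = 3.0) -> np.ndarray:
    """Turn a binary ground-truth mask into a soft target that encodes
    per-pixel uncertainty near object boundaries (illustrative only).

    Pixels far from the boundary keep hard labels (0 or 1), while pixels
    within `margin` pixels of the boundary get intermediate values.
    """
    mask = mask.astype(bool)
    # Signed distance to the boundary: positive inside, negative outside.
    signed = distance_transform_edt(mask) - distance_transform_edt(~mask)
    # Map the signed distance through a linear ramp of width 2*margin.
    return np.clip(0.5 + signed / (2.0 * margin), 0.0, 1.0).astype(np.float32)


# Toy usage: a 2x2 foreground square becomes a soft, boundary-aware target.
gt = np.zeros((6, 6), dtype=np.uint8)
gt[2:4, 2:4] = 1
print(soft_uncertainty_target(gt))
```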
- Complementary Random Masking for RGB-Thermal Semantic Segmentation [63.93784265195356]
RGB-thermal semantic segmentation is a potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions.
This paper proposes 1) a complementary random masking strategy of RGB-T images and 2) self-distillation loss between clean and masked input modalities.
We achieve state-of-the-art performance over three RGB-T semantic segmentation benchmarks.
arXiv Detail & Related papers (2023-03-30T13:57:21Z)
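A minimal sketch of the complementary masking idea (not the paper's implementation; the patch size, mask ratio, and function name are assumptions): one random patch mask is drawn, the RGB image keeps the patches where the mask is 1, and the thermal image keeps the complementary patches, so every location stays visible in exactly one modality.
```python
import torch

def complementary_random_masks(batch: int, height: int, width: int,
                               patch: int = 16, mask_ratio: float = 0.5):
    """Generate complementary patch masks for an RGB-thermal pair (illustrative)."""
    gh, gw = height // patch, width // patch
    keep_rgb = (torch.rand(batch, 1, gh, gw) > mask_ratio).float()
    # Upsample the patch-level mask to pixel resolution.
    keep_rgb = keep_rgb.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return keep_rgb, 1.0 - keep_rgb


# Toy usage: mask an RGB-thermal pair with complementary patterns.
rgb, thermal = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)
m_rgb, m_thr = complementary_random_masks(2, 64, 64)
masked_rgb, masked_thermal = rgb * m_rgb, thermal * m_thr
```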
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
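A minimal sketch of the soft-shrinkage idea described in the entry above (not the ISS-P reference implementation; the shrink factor, pruning ratio, and function name are assumptions): instead of hard-zeroing the smallest-magnitude weights, they are scaled down a little each iteration, so tentatively pruned weights can still recover.
```python
import torch

@torch.no_grad()
def iterative_soft_shrink(weight: torch.Tensor, prune_ratio: float = 0.3,
                          shrink: float = 0.9) -> None:
    """Scale down (rather than zero out) the smallest-magnitude weights in place."""
    flat = weight.abs().flatten()
    k = int(prune_ratio * flat.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(flat, k).values
    unimportant = weight.abs() <= threshold
    weight[unimportant] *= shrink  # gentle shrink instead of hard pruning


# Toy usage: apply one shrinkage step to a layer between optimizer updates.
layer = torch.nn.Conv2d(8, 8, 3)
iterative_soft_shrink(layer.weight, prune_ratio=0.3, shrink=0.9)
```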
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning introduces structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
- Deep ensembles based on Stochastic Activation Selection for Polyp Segmentation [82.61182037130406]
This work deals with medical image segmentation and in particular with accurate polyp detection and segmentation during colonoscopy examinations.
The basic architecture in image segmentation consists of an encoder and a decoder.
We compare several variants of the DeepLab architecture obtained by varying the decoder backbone.
arXiv Detail & Related papers (2021-04-02T02:07:37Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frame correlations are the key sources for exploiting temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
- A deep primal-dual proximal network for image restoration [8.797434238081372]
We design a deep network, named DeepPDNet, built from primal-dual iterations associated with the minimization of a standard penalized likelihood with an analysis prior.
Two learning strategies, "Full learning" and "Partial learning", are proposed; the first is the most efficient numerically.
Extensive results show that the proposed DeepPDNet achieves excellent performance on MNIST and on the more complex BSD68, BSD100, and SET14 datasets for image restoration and single-image super-resolution tasks.
arXiv Detail & Related papers (2020-07-02T08:29:52Z)
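To illustrate what an unrolled primal-dual network can look like, here is a hedged PyTorch sketch in the spirit of the entry above (not the DeepPDNet code; the objective 0.5*||x - y||^2 + lam*||Dx||_1, the learned convolutional analysis operator D, and the per-stage step sizes are assumptions):
```python
import torch
import torch.nn as nn

class UnrolledPrimalDual(nn.Module):
    """Illustrative unrolled primal-dual network (not the DeepPDNet implementation).

    Each stage runs one primal-dual update for 0.5*||x - y||^2 + lam*||D x||_1,
    where the analysis operator D is a learned convolution and the step sizes
    are learned per stage.
    """

    def __init__(self, stages: int = 5, channels: int = 16):
        super().__init__()
        self.stages, self.channels = stages, channels
        self.D = nn.ModuleList(
            nn.Conv2d(1, channels, 3, padding=1, bias=False) for _ in range(stages))
        self.Dt = nn.ModuleList(
            nn.Conv2d(channels, 1, 3, padding=1, bias=False) for _ in range(stages))
        self.tau = nn.Parameter(torch.full((stages,), 0.1))    # primal step sizes
        self.sigma = nn.Parameter(torch.full((stages,), 0.1))  # dual step sizes
        self.lam = 0.1                                         # prior weight (fixed here)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x, x_bar = y.clone(), y.clone()
        u = y.new_zeros(y.shape[0], self.channels, y.shape[2], y.shape[3])
        for k in range(self.stages):
            # Dual ascent on the learned analysis transform, then projection
            # onto the l-infinity ball of radius lam (prox of the l1 conjugate).
            u = torch.clamp(u + self.sigma[k] * self.D[k](x_bar), -self.lam, self.lam)
            # Primal step: proximal map of the quadratic data term.
            x_new = (x - self.tau[k] * self.Dt[k](u) + self.tau[k] * y) / (1 + self.tau[k])
            x_bar = 2 * x_new - x  # over-relaxation
            x = x_new
        return x


# Toy usage: restore a random "noisy" image.
net = UnrolledPrimalDual()
restored = net(torch.randn(1, 1, 32, 32))
```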