RC-Net: A Convolutional Neural Network for Retinal Vessel Segmentation
- URL: http://arxiv.org/abs/2112.11078v1
- Date: Tue, 21 Dec 2021 10:24:01 GMT
- Title: RC-Net: A Convolutional Neural Network for Retinal Vessel Segmentation
- Authors: Tariq M Khan, Antonio Robles-Kelly, Syed S. Naqvi
- Abstract summary: We present RC-Net, a fully convolutional network, where the number of filters per layer is optimized to reduce feature overlapping and complexity.
In our experiments, RC-Net is quite competitive, outperforming alternative vessel segmentation methods with two or even three orders of magnitude fewer trainable parameters.
- Score: 3.0846824529023387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over recent years, increasingly complex approaches based on sophisticated
convolutional neural network architectures have been slowly pushing performance
on well-established benchmark datasets. In this paper, we take a step back to
examine the real need for such complexity. We present RC-Net, a fully
convolutional network, where the number of filters per layer is optimized to
reduce feature overlapping and complexity. We also use skip connections to
minimize the loss of spatial information by keeping the number of pooling
operations in the network to a minimum. Two publicly available retinal vessel
segmentation datasets were used in our experiments. In our experiments, RC-Net
is quite competitive, outperforming alternative vessel segmentation methods
with two or even three orders of magnitude fewer trainable parameters.
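The design principle behind RC-Net, as summarized above (very few pooling operations, with skip connections restoring the spatial detail that pooling discards), can be sketched in a few lines of NumPy. The layer shapes, kernel size, and single-channel simplification below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of a single-channel map with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2 (spatial dims assumed even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3))

enc = conv2d(img, kernel)          # 14x14 encoder feature map
bottleneck = max_pool2(enc)        # a single pooling step (7x7)
decoded = upsample2(bottleneck)    # back to 14x14, but spatially coarse
fused = np.stack([enc, decoded])   # skip connection restores fine detail
```

Stacking the encoder map with the upsampled bottleneck is the simplest concatenation-style skip connection; RC-Net applies the same idea at full network scale, with per-layer filter counts optimized as described in the abstract.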
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
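The "TC" stream's use of the CWT to lift a 1D signal into a 2D (scale x time) tensor can be illustrated with a minimal Morlet-wavelet transform in NumPy. The wavelet choice, scale range, and toy signal here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def morlet(t, scale, w0=5.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_2d(signal, scales):
    """Continuous Wavelet Transform: turn a 1D signal into a 2D
    (scales x time) tensor of coefficient magnitudes."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        psi = morlet(t, s)
        # correlate with the wavelet; 'same' keeps the time axis aligned
        out[i] = np.abs(np.convolve(signal, np.conj(psi)[::-1], mode="same"))
    return out

# a toy signal whose frequency changes halfway through
t = np.linspace(0, 1, 256)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 32 * t))
tensor = cwt_2d(sig, scales=np.arange(1, 33))
print(tensor.shape)  # (32, 256)
```

The resulting (scales x time) magnitude array is exactly the kind of 2D tensor a convolutional stream can then consume like an image.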
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- A Generalization of Continuous Relaxation in Structured Pruning [0.3277163122167434]
Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks.
We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal.
The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations.
arXiv Detail & Related papers (2023-08-28T14:19:13Z)
- RFC-Net: Learning High Resolution Global Features for Medical Image Segmentation on a Computational Budget [4.712700480142554]
We propose Receptive Field Chain Network (RFC-Net) that learns high resolution global features on a compressed computational space.
Our experiments demonstrate that RFC-Net achieves state-of-the-art performance on Kvasir and CVC-ClinicDB benchmarks for Polyp segmentation.
arXiv Detail & Related papers (2023-02-13T06:52:47Z)
- Towards Bi-directional Skip Connections in Encoder-Decoder Architectures and Beyond [95.46272735589648]
We propose backward skip connections that bring decoded features back to the encoder.
Our design can be jointly adopted with forward skip connections in any encoder-decoder architecture.
We propose a novel two-phase Neural Architecture Search (NAS) algorithm, namely BiX-NAS, to search for the best multi-scale skip connections.
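The idea of a backward skip connection — decoded features re-entering the encoder — can be illustrated with a toy two-pass sketch. Dense layers stand in for the convolutional stages, and all shapes and weights are illustrative assumptions; the actual BiX-NAS search over multi-scale skips is not reproduced here:

```python
import numpy as np

def enc(x, w):
    return np.maximum(0, x @ w)  # toy "encoder" stage (ReLU dense layer)

def dec(h, w):
    return np.maximum(0, h @ w)  # toy "decoder" stage

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))
we, wd, wb = (rng.standard_normal((8, 8)) for _ in range(3))

# pass 1: plain encoder -> decoder, with a forward skip (e1 added to d1)
e1 = enc(x, we)
d1 = dec(e1, wd) + e1

# pass 2: backward skip — the decoded feature flows back into the encoder
e2 = enc(x + d1 @ wb, we)
d2 = dec(e2, wd) + e2
```

Forward and backward skips coexist here exactly as the summary describes: the forward skip is the additive shortcut inside each pass, while the backward skip carries decoder output into the next encoding pass.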
arXiv Detail & Related papers (2022-03-11T01:38:52Z)
- PDFNet: Pointwise Dense Flow Network for Urban-Scene Segmentation [0.0]
We propose a novel lightweight architecture named point-wise dense flow network (PDFNet).
In PDFNet, we employ dense, residual, and multiple shortcut connections to allow a smooth gradient flow to all parts of the network.
Our method significantly outperforms baselines in capturing small classes and in few-data regimes.
arXiv Detail & Related papers (2021-09-21T10:39:46Z)
- Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
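A minimal sketch of a Fisher-information-style channel importance score: the gradient of the loss with respect to a virtual per-channel mask is accumulated over spatial positions, and its square ranks channels for pruning. The tensor shapes and random data are illustrative assumptions, and the paper's coupled-channel grouping is not reproduced:

```python
import numpy as np

def fisher_importance(acts, grads):
    """Fisher-style channel importance.
    acts, grads: (batch, channels, H, W) activations and their gradients.
    d(loss)/d(mask_c) is the spatial sum of grad * activation; squaring
    and averaging over the batch gives a per-channel importance score."""
    g_mask = (acts * grads).sum(axis=(2, 3))   # (batch, channels)
    return (g_mask ** 2).mean(axis=0)          # (channels,)

rng = np.random.default_rng(0)
acts = rng.standard_normal((16, 4, 8, 8))
grads = rng.standard_normal((16, 4, 8, 8))
grads[:, 2] *= 10.0   # channel 2 carries much larger gradients

scores = fisher_importance(acts, grads)
# channel 2 should dominate; the lowest-scoring channels are pruning candidates
```

Channels whose score is near zero change the loss little when masked out, which is the intuition the unified metric formalizes.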
arXiv Detail & Related papers (2021-08-02T08:21:44Z)
- Training and Inference for Integer-Based Semantic Segmentation Network [18.457074855823315]
We propose a new quantization framework for training and inference of semantic segmentation networks.
Our framework is evaluated on mainstream semantic segmentation networks like FCN-VGG16 and DeepLabv3-ResNet50.
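A generic affine (scale/zero-point) quantize-dequantize round trip illustrates the kind of integer mapping such frameworks build on. The 8-bit scheme below is a standard textbook construction, not the paper's specific framework:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine (asymmetric) quantization of a float tensor to unsigned ints."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integers back to floats; error is bounded by the step size."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3)).astype(np.float32)
q, s, zp = quantize(w)
w_hat = dequantize(q, s, zp)
err = np.abs(w - w_hat).max()   # at most one quantization step
```

Integer-only inference then replaces float matmuls with integer arithmetic on `q`, folding `scale` and `zero_point` into the surrounding layers.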
arXiv Detail & Related papers (2020-11-30T02:07:07Z)
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
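One such decomposition — a composite kernel expressed as parameter-free sum pooling followed by a smaller learnable kernel — can be checked numerically. The kernel sizes below are illustrative assumptions; by associativity of convolution, the factored form reproduces the direct convolution exactly:

```python
import numpy as np

def conv2d_valid(x, k):
    """'valid' 2D convolution (kernel flipped, as in the math definition)."""
    k = k[::-1, ::-1]
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_full(a, b):
    """'full' 2D convolution, used to build the composite kernel."""
    pa = np.pad(a, ((b.shape[0] - 1,) * 2, (b.shape[1] - 1,) * 2))
    return conv2d_valid(pa, b)

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 10))
small = rng.standard_normal((2, 2))   # cheap learnable part (4 parameters)
ones = np.ones((2, 2))                # parameter-free sum pooling

composite = conv2d_full(ones, small)  # equivalent 3x3 kernel (9 parameters)
direct = conv2d_valid(x, composite)
factored = conv2d_valid(conv2d_valid(x, ones), small)
# direct and factored agree: same output, fewer learnable parameters
```

The saving grows with kernel size: the sum-pooling factor costs no parameters, so only the small kernel needs to be learned and stored.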
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.