FRS-Nets: Fourier Parameterized Rotation and Scale Equivariant Networks
for Retinal Vessel Segmentation
- URL: http://arxiv.org/abs/2309.15638v1
- Date: Wed, 27 Sep 2023 13:14:57 GMT
- Title: FRS-Nets: Fourier Parameterized Rotation and Scale Equivariant Networks
for Retinal Vessel Segmentation
- Authors: Zihong Sun, Qi Xie and Deyu Meng
- Abstract summary: We construct a novel convolution operator (FRS-Conv), which is Fourier parameterized and equivariant to rotation and scaling.
With merely 13.9% of the parameters of the corresponding baselines, FRS-Nets achieve state-of-the-art performance.
This demonstrates the remarkable accuracy, generalization, and clinical application potential of FRS-Nets.
- Score: 55.4653338610275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With translation equivariance, convolutional neural networks (CNNs) have
achieved great success in retinal vessel segmentation. However, some other
symmetries of the vascular morphology are not characterized by CNNs, such as
rotation and scale symmetries. To embed more equivariance into CNNs and achieve
the accuracy requirement for retinal vessel segmentation, we construct a novel
convolution operator (FRS-Conv), which is Fourier parameterized and equivariant
to rotation and scaling. Specifically, we first adopt a new parameterization
scheme, which enables convolutional filters to perform arbitrary
transformations with high accuracy. Second, we derive the formulations for
the rotation and scale equivariant convolution mapping. Finally, we construct
FRS-Conv following the proposed formulations and replace the traditional
convolution filters in U-Net and Iter-Net with FRS-Conv (FRS-Nets). We
faithfully reproduce all compared methods and conduct comprehensive experiments
on three public datasets under both in-dataset and cross-dataset settings. With
merely 13.9% of the parameters of the corresponding baselines, FRS-Nets achieve
state-of-the-art performance and significantly outperform all compared methods.
This demonstrates the remarkable accuracy, generalization, and clinical
application potential of FRS-Nets.
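The central idea behind the Fourier parameterization can be sketched in a few lines: a filter expressed in a polar Fourier basis can be rotated to an arbitrary angle exactly, because each angular basis term only picks up a phase. This is an illustrative toy, not the authors' implementation; the Gaussian radial profile and the chosen frequencies here are assumptions.

```python
import numpy as np

def fourier_filter(coeffs, size=9, alpha=0.0):
    """Evaluate a polar-Fourier-parameterized filter on a square grid.

    coeffs: dict mapping angular frequency k -> complex coefficient.
    alpha:  rotation angle in radians; rotation is exact because each
            basis term e^{ik(theta - alpha)} just acquires a phase.
    """
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    r = np.hypot(x, y) / c          # normalized radius
    theta = np.arctan2(y, x)
    out = np.zeros((size, size), dtype=complex)
    for k, ck in coeffs.items():
        # Gaussian radial profile keeps the filter localized (an assumption,
        # not the paper's exact radial basis).
        radial = np.exp(-4.0 * (r - 0.5) ** 2)
        out += ck * radial * np.exp(1j * k * (theta - alpha))
    return out.real

coeffs = {0: 1.0, 1: 0.5 + 0.2j, 2: -0.3}
f0 = fourier_filter(coeffs, alpha=0.0)
f45 = fourier_filter(coeffs, alpha=np.pi / 4)
# Rotating via the coefficients' phases matches rotating the coordinates:
phase_coeffs = {k: ck * np.exp(-1j * k * np.pi / 4) for k, ck in coeffs.items()}
f45_phase = fourier_filter(phase_coeffs, alpha=0.0)
assert np.allclose(f45, f45_phase)
```

Because rotation acts on the coefficients rather than on a sampled grid, no interpolation error is introduced, which is what allows filters to "arbitrarily perform transformations with high accuracy".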
Related papers
- Sorted Convolutional Network for Achieving Continuous Rotational
Invariance [56.42518353373004]
We propose a Sorting Convolution (SC) inspired by some hand-crafted features of texture images.
SC achieves continuous rotational invariance without requiring additional learnable parameters or data augmentation.
Our results demonstrate that SC achieves the best performance in the aforementioned tasks.
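The mechanism can be illustrated with a toy sketch (our own simplification, not the paper's code): rotating the image cyclically shifts samples taken on a ring around each pixel, and sorting the samples before the weighted sum removes any dependence on that shift.

```python
import numpy as np

def sorted_ring_response(ring_samples, weights):
    """Toy 'sorting convolution' response for one ring of samples.

    Rotating the underlying image cyclically shifts `ring_samples`;
    sorting before the dot product makes the response invariant to
    any such shift, with no extra learnable parameters.
    """
    return float(np.dot(np.sort(ring_samples), weights))

weights = np.array([0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 0.25, 1.0])
ring = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
rotated = np.roll(ring, 3)  # rotation by 3 sampling positions on the circle
assert sorted_ring_response(ring, weights) == sorted_ring_response(rotated, weights)
```

Continuous rotational invariance additionally requires interpolating ring samples between pixel positions; this sketch only shows the discrete case.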
arXiv Detail & Related papers (2023-05-23T18:37:07Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data [2.207533492015563]
We present a new family of segmentation networks that use equivariant voxel convolutions based on spherical harmonics.
These networks are robust to data poses not seen during training, and do not require rotation-based data augmentation during training.
We demonstrate improved segmentation performance in MRI brain tumor and healthy brain structure segmentation tasks.
arXiv Detail & Related papers (2023-03-01T09:27:08Z)
- Empowering Networks With Scale and Rotation Equivariance Using A Similarity Convolution [16.853711292804476]
We devise a method that endows CNNs with simultaneous equivariance with respect to translation, rotation, and scaling.
Our approach defines a convolution-like operation and ensures equivariance based on our proposed scalable Fourier-Argand representation.
We validate the efficacy of our approach in the image classification task, demonstrating its robustness and the generalization ability to both scaled and rotated inputs.
arXiv Detail & Related papers (2023-03-01T08:43:05Z)
- How can spherical CNNs benefit ML-based diffusion MRI parameter estimation? [2.4417196796959906]
Spherical convolutional neural networks (S-CNNs) offer distinct advantages over conventional fully-connected networks (FCNs).
Current clinical practice commonly acquires dMRI data consisting of only 6 diffusion-weighted images (DWIs).
arXiv Detail & Related papers (2022-07-01T17:49:26Z)
- Equivariance versus Augmentation for Spherical Images [0.7388859384645262]
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images.
We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation.
arXiv Detail & Related papers (2022-02-08T16:49:30Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Implicit Equivariance in Convolutional Networks [1.911678487931003]
Implicitly Equivariant Networks (IENs) induce equivariance in the different layers of a standard CNN model.
We show IEN outperforms the state-of-the-art rotation equivariant tracking method while providing faster inference speed.
arXiv Detail & Related papers (2021-11-28T14:44:17Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs now maintain performance with dramatic reduction in parameters and computations.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
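The weight sharing in ACDC can be sketched as follows (a simplified reading of the abstract; the kernel size, atom count, and exact sharing scheme here are assumptions): every convolution kernel is reconstructed as a linear combination of a small shared dictionary of kernel "atoms", so the spatial parameters are shared across the whole layer.

```python
import numpy as np

rng = np.random.default_rng(0)
c_out, c_in, k, n_atoms = 64, 64, 3, 6

# Shared dictionary of kernel atoms, plus cheap per-filter mixing coefficients.
atoms = rng.standard_normal((n_atoms, k, k))           # shared across all filters
coeffs = rng.standard_normal((c_out, c_in, n_atoms))   # one vector per filter pair

# Reconstruct the full conv weight tensor from the decomposition.
weights = np.einsum('oin,nkl->oikl', coeffs, atoms)
assert weights.shape == (c_out, c_in, k, k)

dense_params = c_out * c_in * k * k                      # 36864
acdc_params = n_atoms * k * k + c_out * c_in * n_atoms   # 54 + 24576 = 24630
print(dense_params, acdc_params)
```

The saving grows with kernel size, and the regularization comes from forcing all kernels through the same small atom basis.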
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.