Log-Polar Space Convolution for Convolutional Neural Networks
- URL: http://arxiv.org/abs/2107.11943v1
- Date: Mon, 26 Jul 2021 03:41:40 GMT
- Title: Log-Polar Space Convolution for Convolutional Neural Networks
- Authors: Bing Su, Ji-Rong Wen
- Abstract summary: Convolutional neural networks use regular quadrilateral convolution kernels to extract features.
Many popular models use small convolution kernels, resulting in small local receptive fields in lower layers.
This paper proposes a novel log-polar space convolution (LPSC) method, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions.
- Score: 43.737520152861755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks use regular quadrilateral convolution kernels
to extract features. Since the number of parameters increases quadratically
with the size of the convolution kernel, many popular models use small
convolution kernels, resulting in small local receptive fields in lower layers.
This paper proposes a novel log-polar space convolution (LPSC) method, where
the convolution kernel is elliptical and adaptively divides its local receptive
field into different regions according to the relative directions and
logarithmic distances. The local receptive field grows exponentially with the
number of distance levels. Therefore, the proposed LPSC not only naturally
encodes local spatial structures, but also greatly increases the single-layer
receptive field while maintaining the number of parameters. We show that LPSC
can be implemented with conventional convolution via log-polar space pooling
and can be applied in any network architecture to replace conventional
convolutions. Experiments on different tasks and datasets demonstrate the
effectiveness of the proposed LPSC. Code is available at
https://github.com/BingSu12/Log-Polar-Space-Convolution.
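As a rough illustration of the geometry described in the abstract (a sketch, not the authors' implementation), the snippet below assigns each pixel offset in a local receptive field to a log-polar bin indexed by relative direction and logarithmic distance; the bin counts `n_dirs` and `n_levels` are hypothetical parameters chosen for illustration.

```python
import math

def log_polar_bin(dx, dy, n_dirs=8, n_levels=3, base=2.0):
    """Map a pixel offset (dx, dy) to a (direction, distance-level) bin.

    Returns None for the center pixel, which is typically treated as
    its own region.
    """
    if dx == 0 and dy == 0:
        return None
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) % (2 * math.pi)
    # Direction bin: the full circle is split into n_dirs equal sectors.
    d_bin = int(theta / (2 * math.pi) * n_dirs) % n_dirs
    # Distance level grows logarithmically: radii in [1, base) map to
    # level 0, [base, base^2) to level 1, and so on.
    level = min(n_levels - 1, max(0, int(math.log(r, base))))
    return d_bin, level
```

With base-2 distance levels, the outermost radius covered grows as `base ** n_levels`, which is the sense in which the receptive field grows exponentially with the number of distance levels while the number of bins (and hence kernel parameters) stays small.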
Related papers
- PFGNet: A Fully Convolutional Frequency-Guided Peripheral Gating Network for Efficient Spatiotemporal Predictive Learning [27.26429269735324]
PFGNet is a fully convolutional framework that dynamically modulates receptive fields through pixel-wise frequency-guided gating.
PFGNet delivers SOTA or near-SOTA forecasting performance with substantially fewer parameters and FLOPs.
arXiv Detail & Related papers (2026-02-24T04:31:12Z)
- GSPN-2: Efficient Parallel Sequence Modeling [101.33780567131716]
Generalized Spatial Propagation Network (GSPN) replaces quadratic self-attention with a line-scan propagation scheme.
GSPN-2 establishes a new efficiency frontier for modeling global spatial context in vision applications.
arXiv Detail & Related papers (2025-11-28T07:26:45Z)
- LipKernel: Lipschitz-Bounded Convolutional Neural Networks via Dissipative Layers [0.0468732641979009]
We propose a layer-wise parameterization for convolutional neural networks (CNNs) that includes built-in robustness guarantees.
Our method, LipKernel, directly parameterizes dissipative convolution kernels using a 2-D Roesser-type state space model.
We show that the run-time using our method is orders of magnitude faster than state-of-the-art Lipschitz-bounded networks.
arXiv Detail & Related papers (2024-10-29T17:20:14Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- LDConv: Linear deformable convolution for improving convolutional neural networks [18.814748446649627]
Linear Deformable Convolution (LDConv) is a plug-and-play convolutional operation that can replace the convolutional operation to improve network performance.
LDConv reduces the parameter growth of standard convolution and Deformable Conv from quadratic to linear.
arXiv Detail & Related papers (2023-11-20T07:54:54Z)
- Dilated convolution with learnable spacings [6.6389732792316005]
CNNs need large receptive fields (RF) to compete with vision transformers.
RFs can simply be enlarged by increasing the convolution kernel sizes.
The number of trainable parameters, which scales quadratically with the kernel's size in the 2D case, rapidly becomes prohibitive.
This paper presents a new method to increase the RF size without increasing the number of parameters.
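The quadratic scaling mentioned above is easy to verify: a dense 2D kernel has k*k weights per input-output channel pair, so doubling the kernel size roughly quadruples the parameter count. A minimal counting sketch using the standard formula (no specific library assumed):

```python
def conv2d_params(k, c_in, c_out, bias=True):
    """Parameter count of a standard dense k x k 2D convolution."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Doubling k quadruples the weight count for the same channel widths.
small = conv2d_params(3, 64, 64)  # 3x3 kernel
large = conv2d_params(7, 64, 64)  # 7x7 kernel covers a far larger RF
```

This is exactly why enlarging the RF by enlarging dense kernels quickly becomes prohibitive, and why methods such as LPSC, learnable spacings, or dilation aim to grow the RF without growing this count.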
arXiv Detail & Related papers (2021-12-07T14:54:24Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose randomized dynamic programming (RDP), a family of algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer [76.44375136492827]
Convolutional Neural Networks (CNNs) are often scale-sensitive.
We address this limitation by exploiting multi-scale features at a finer granularity.
The proposed convolution operation, named Poly-Scale Convolution (PSConv), mixes up a spectrum of dilation rates.
arXiv Detail & Related papers (2020-07-13T05:14:11Z)
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
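The depthwise step described above, where each input channel is convolved with its own 2D kernel, can be sketched in a few lines of plain Python. This is a generic valid-mode depthwise convolution for illustration, not DO-Conv's over-parameterized composition itself:

```python
def depthwise_conv2d(x, kernels):
    """Valid-mode depthwise convolution.

    x:       list of C channels, each an HxW list of lists
    kernels: list of C kernels, each a kxk list of lists
             (one distinct kernel per input channel)
    """
    out = []
    for ch, ker in zip(x, kernels):
        k = len(ker)
        h_out = len(ch) - k + 1
        w_out = len(ch[0]) - k + 1
        # Each output plane depends on exactly one input channel.
        plane = [[sum(ker[i][j] * ch[r + i][c + j]
                      for i in range(k) for j in range(k))
                  for c in range(w_out)]
                 for r in range(h_out)]
        out.append(plane)
    return out
```

Because channels never mix, a depthwise layer uses C * k * k weights instead of the C_in * C_out * k * k of a dense convolution, which is what makes the extra depthwise factor in DO-Conv cheap to add.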
arXiv Detail & Related papers (2020-06-22T06:57:10Z)
- Localized convolutional neural networks for geospatial wind forecasting [0.0]
Convolutional Neural Networks (CNN) possess positive qualities for many kinds of spatial data.
In this work, we propose localized convolutional neural networks that enable CNNs to learn local features in addition to the global ones.
They can be added to any convolutional layers, easily end-to-end trained, introduce minimal additional complexity, and let CNNs retain most of their benefits to the extent that they are needed.
arXiv Detail & Related papers (2020-05-12T17:14:49Z)
- XSepConv: Extremely Separated Convolution [60.90871656244126]
We propose a novel extremely separated convolutional block (XSepConv).
It fuses spatially separable convolutions into depthwise convolution to reduce both the computational cost and parameter size of large kernels.
XSepConv is designed to be an efficient alternative to vanilla depthwise convolution with large kernel sizes.
arXiv Detail & Related papers (2020-02-27T11:46:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.