LDConv: Linear deformable convolution for improving convolutional neural networks
- URL: http://arxiv.org/abs/2311.11587v3
- Date: Mon, 22 Jul 2024 13:46:46 GMT
- Title: LDConv: Linear deformable convolution for improving convolutional neural networks
- Authors: Xin Zhang, Yingze Song, Tingting Song, Degang Yang, Yichen Ye, Jie Zhou, Liming Zhang
- Abstract summary: Linear Deformable Convolution (LDConv) is a plug-and-play convolutional operation that can replace the convolutional operation to improve network performance.
LDConv reduces the parameter growth of standard convolution and Deformable Conv from quadratic to linear.
- Score: 18.814748446649627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks based on convolutional operations have achieved remarkable results in deep learning, but standard convolution has two inherent flaws. On the one hand, the operation is confined to a local window, so it cannot capture information from other locations, and its sampling shape is fixed. On the other hand, the kernel size is fixed at k $\times$ k, a square shape, so the number of parameters grows quadratically with size. Although Deformable Convolution (Deformable Conv) addresses the problem of fixed sampling in standard convolution, its number of parameters also grows quadratically. In response to these problems, Linear Deformable Convolution (LDConv) is explored in this work; it gives the convolution kernel an arbitrary number of parameters and arbitrary sampling shapes, providing richer options for the trade-off between network overhead and performance. In LDConv, a novel coordinate generation algorithm is defined to generate different initial sampling positions for convolutional kernels of arbitrary size. To adapt to changing targets, offsets are introduced to adjust the shape of the samples at each position. LDConv reduces the parameter growth of standard convolution and Deformable Conv from quadratic to linear. Moreover, it performs efficient feature extraction through irregular convolutional operations and opens up more options for convolutional sampling shapes. Object detection experiments on the representative datasets COCO2017, VOC 7+12, and VisDrone-DET2021 fully demonstrate the advantages of LDConv. LDConv is a plug-and-play operation that can replace standard convolution to improve network performance. The code for the relevant tasks can be found at https://github.com/CV-ZhangXin/LDConv.
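As a rough, hypothetical PyTorch sketch of the idea (not the authors' implementation; the official code is in the repository above), the module below samples the input at `num_points` initial positions plus learned offsets and mixes the gathered samples with a pointwise conv, so its parameter count grows linearly with `num_points`. The grid-based initial positions and layer shapes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearDeformableConvSketch(nn.Module):
    """Hypothetical sketch: a conv kernel with an arbitrary number of sample
    points, so parameters grow linearly in num_points rather than as k*k."""

    def __init__(self, in_ch, out_ch, num_points):
        super().__init__()
        self.num_points = num_points
        # Two offsets (dy, dx) per sample point, predicted from the input.
        self.offset_pred = nn.Conv2d(in_ch, 2 * num_points, 3, padding=1)
        # Mixing weights over the gathered samples: linear in num_points.
        self.weight = nn.Conv2d(in_ch * num_points, out_ch, 1)
        # Initial sample positions, as regular as num_points allows
        # (the paper defines a dedicated coordinate-generation algorithm).
        base = max(int(num_points ** 0.5), 1)
        coords = [(i // base, i % base) for i in range(num_points)]
        self.register_buffer("init_pos", torch.tensor(coords, dtype=torch.float32))

    def forward(self, x):
        b, c, h, w = x.shape
        off = self.offset_pred(x).view(b, self.num_points, 2, h, w)
        ys = torch.arange(h, device=x.device).view(1, 1, h, 1)
        xs = torch.arange(w, device=x.device).view(1, 1, 1, w)
        # Absolute sampling grid = pixel position + initial position + offset.
        py = ys + self.init_pos[:, 0].view(1, -1, 1, 1) + off[:, :, 0]
        px = xs + self.init_pos[:, 1].view(1, -1, 1, 1) + off[:, :, 1]
        # Normalize to [-1, 1] and gather by bilinear interpolation.
        grid = torch.stack([2 * px / (w - 1) - 1, 2 * py / (h - 1) - 1], dim=-1)
        sampled = F.grid_sample(x, grid.view(b, self.num_points * h, w, 2),
                                align_corners=True)          # (B, C, N*H, W)
        sampled = sampled.view(b, c, self.num_points, h, w)
        return self.weight(sampled.reshape(b, c * self.num_points, h, w))

# y = LinearDeformableConvSketch(64, 128, num_points=5)(torch.randn(2, 64, 32, 32))
```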
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
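CoCN learns its permutation differentiably; as a loose illustration of what "calibrating a graph for Euclidean convolution" can mean, the hypothetical sketch below hard-sorts nodes by degree and then applies an ordinary 1D convolution over the ordered sequence:

```python
import torch
import torch.nn as nn

def calibrate_and_convolve(node_feats, adj, conv1d):
    """Order nodes by degree (one possible permutation) so a 1D Euclidean
    convolution can slide over them; CoCN learns the permutation
    differentiably, whereas this hard sort is only illustrative."""
    degree = adj.sum(dim=1)                       # (N,) node degrees
    order = torch.argsort(degree, descending=True)
    seq = node_feats[order].t().unsqueeze(0)      # (1, F, N), channels first
    return conv1d(seq)                            # ordinary Euclidean conv

# conv = nn.Conv1d(16, 32, kernel_size=3, padding=1)
# out = calibrate_and_convolve(torch.randn(10, 16),
#                              torch.randint(0, 2, (10, 10)).float(), conv)
```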
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- An Improved Normed-Deformable Convolution for Crowd Counting [70.02434289611566]
Deformable convolution is used to exploit scale-adaptive capabilities of CNN features on heads of varying size.
An improved Normed-Deformable Convolution (i.e., NDConv) is proposed in this paper.
Our method outperforms state-of-the-art methods on the ShanghaiTech A, ShanghaiTech B, UCF_QNRF, and UCF_CC_50 datasets.
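A hedged sketch of the general recipe (deformable convolution whose offsets are penalized by a norm term); the plain L2 penalty below is an assumed stand-in, since NDConv's actual norm constraint is more specific:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class NormedDeformableSketch(nn.Module):
    """Deformable conv whose predicted offsets are regularized by a norm
    term; the L2 penalty below is an assumed stand-in for NDConv's norm."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        off = self.offset(x)
        out = self.deform(x, off)
        reg = off.pow(2).mean()   # add this, weighted, to the training loss
        return out, reg
```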
arXiv Detail & Related papers (2022-06-16T10:56:26Z)
- OneDConv: Generalized Convolution For Transform-Invariant Representation [76.15687106423859]
We propose a novel generalized one-dimensional convolutional operator (OneDConv).
It dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner.
It improves the robustness and generalization of convolution without sacrificing the performance on common images.
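OneDConv's one-dimensional transformation is specific to the paper; the hypothetical sketch below only conveys the general flavor of transforming a base kernel from a cheap global descriptor of the input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConditionedConvSketch(nn.Module):
    """A cheap global descriptor gates a base kernel per output channel and
    per sample; this shows only the flavor of input-conditioned kernels."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.base = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, out_ch), nn.Sigmoid())

    def forward(self, x):
        b = x.size(0)
        g = self.gate(x)                                   # (B, out_ch)
        # One transformed kernel per sample, applied via a grouped conv.
        w = self.base.unsqueeze(0) * g.view(b, -1, 1, 1, 1)
        w = w.view(-1, *self.base.shape[1:])               # (B*out, in, k, k)
        out = F.conv2d(x.view(1, -1, *x.shape[2:]), w,
                       padding=self.k // 2, groups=b)
        return out.view(b, -1, *out.shape[2:])
```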
arXiv Detail & Related papers (2022-01-15T07:44:44Z)
- Dilated convolution with learnable spacings [6.6389732792316005]
CNNs need large receptive fields (RF) to compete with visual transformers.
RFs can simply be enlarged by increasing the convolution kernel sizes.
The number of trainable parameters, which scales quadratically with the kernel's size in the 2D case, rapidly becomes prohibitive.
This paper presents a new method to increase the RF size without increasing the number of parameters.
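A minimal sketch of this idea, simplified relative to the paper's construction: a fixed number of weights with learnable continuous 2D positions are bilinearly scattered into a larger kernel, so the RF grows while the parameter count stays constant:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSpacingConvSketch(nn.Module):
    """n weights at learnable fractional positions inside an rf x rf kernel:
    parameters stay O(n) while the receptive field is rf x rf."""

    def __init__(self, in_ch, out_ch, n_weights=9, rf=7):
        super().__init__()
        self.rf = rf
        self.w = nn.Parameter(torch.randn(out_ch, in_ch, n_weights) * 0.05)
        # Learnable continuous (y, x) positions inside the receptive field.
        self.pos = nn.Parameter(torch.rand(n_weights, 2) * (rf - 1))

    def build_kernel(self):
        K = self.rf
        oc, ic = self.w.shape[:2]
        p = self.pos.clamp(0, K - 1 - 1e-4)
        p0 = p.floor()
        f = p - p0                           # fractional parts, (n, 2)
        p0 = p0.long()
        flat = self.w.new_zeros(oc, ic, K * K)
        # Bilinear scatter: each weight spreads over its 4 neighbouring taps.
        for dy in (0, 1):
            for dx in (0, 1):
                cy = f[:, 0] if dy else 1 - f[:, 0]
                cx = f[:, 1] if dx else 1 - f[:, 1]
                idx = (p0[:, 0] + dy) * K + (p0[:, 1] + dx)
                flat = flat.index_add(2, idx, self.w * (cy * cx))
        return flat.view(oc, ic, K, K)

    def forward(self, x):
        return F.conv2d(x, self.build_kernel(), padding=self.rf // 2)
```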
arXiv Detail & Related papers (2021-12-07T14:54:24Z)
- Group Shift Pointwise Convolution for Volumetric Medical Image Segmentation [31.72090839643412]
We introduce a novel Group Shift Pointwise Convolution (GSP-Conv) to improve the effectiveness and efficiency of 3D convolutions.
GSP-Conv simplifies 3D convolutions into pointwise ones with 1x1x1 kernels, which dramatically reduces the number of model parameters and FLOPs.
Results show that our method, with substantially decreased model complexity, achieves comparable or even better performance than models employing 3D convolutions.
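A hedged sketch of the shift-then-pointwise recipe; the circular `torch.roll` shift and the six-group split are assumptions for illustration (the paper's grouping and padding scheme may differ):

```python
import torch
import torch.nn as nn

class GroupShiftPointwiseSketch(nn.Module):
    """Channel groups are shifted one voxel along different axes/directions,
    then a 1x1x1 conv mixes them; parameters come from the pointwise conv only."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pw = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                  # x: (B, C, D, H, W)
        groups = x.chunk(6, dim=1)
        # (axis, direction) pairs: +/- one-voxel shifts along D, H, W.
        shifted = [torch.roll(g, shifts=d, dims=ax)
                   for g, (ax, d) in zip(groups, [(2, 1), (2, -1), (3, 1),
                                                  (3, -1), (4, 1), (4, -1)])]
        shifted += list(groups[len(shifted):])   # leftovers stay unshifted
        return self.pw(torch.cat(shifted, dim=1))
```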
arXiv Detail & Related papers (2021-09-26T15:27:33Z)
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and close votes for the geometric centroids.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
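A hypothetical sketch of the dynamic-convolution step: a descriptor pooled from one candidate instance's gathered points is decoded into filter parameters that score a mask over all points (the paper's filter generator is richer than this single pointwise filter):

```python
import torch
import torch.nn as nn

class InstanceDynamicConvSketch(nn.Module):
    """A descriptor pooled from one candidate instance's gathered points is
    decoded into filter weights that score a mask over all points."""

    def __init__(self, feat_dim):
        super().__init__()
        self.gen = nn.Linear(feat_dim, feat_dim + 1)   # 1x1 filter + bias

    def forward(self, point_feats, gathered_idx):
        # point_feats: (N, F); gathered_idx: indices of homogeneous points.
        desc = point_feats[gathered_idx].mean(dim=0)   # instance descriptor
        params = self.gen(desc)
        w, bias = params[:-1], params[-1]
        return point_feats @ w + bias                  # (N,) mask logits
```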
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
- PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer [76.44375136492827]
Convolutional Neural Networks (CNNs) are often scale-sensitive.
We address this shortcoming by exploiting multi-scale features at a finer granularity.
The proposed convolution operation, named Poly-Scale Convolution (PSConv), mixes up a spectrum of dilation rates.
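A grouped approximation of this idea as a sketch; PSConv actually interleaves dilation rates cyclically across individual channels rather than in the coarse groups used below:

```python
import torch
import torch.nn as nn

class PolyScaleConvSketch(nn.Module):
    """Output channels are split into groups, each produced with a different
    dilation rate, then concatenated into one multi-scale response."""

    def __init__(self, in_ch, out_ch, k=3, dilations=(1, 2, 3, 4)):
        super().__init__()
        split = [out_ch // len(dilations)] * len(dilations)
        split[0] += out_ch - sum(split)          # absorb any remainder
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, oc, k, padding=d * (k // 2), dilation=d)
            for oc, d in zip(split, dilations))

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)
```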
arXiv Detail & Related papers (2020-07-13T05:14:11Z)
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
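A simplified sketch of the composition, with shapes reduced relative to the paper: a per-input-channel linear (depthwise) operator over the kernel taps is composed with a conventional kernel, and since both factors are linear, the product folds into one ordinary convolution at inference:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DOConvSketch(nn.Module):
    """A per-input-channel linear (depthwise) operator over the k*k kernel
    taps is composed with a conventional kernel; the product folds into one
    ordinary convolution, so inference cost is unchanged."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k, self.in_ch = k, in_ch
        # Depthwise operator D, initialized near identity.
        self.D = nn.Parameter(torch.eye(k * k).repeat(in_ch, 1, 1)
                              + 0.01 * torch.randn(in_ch, k * k, k * k))
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, k * k) * 0.05)

    def forward(self, x):
        # Fold W and D into one kernel: W'[o,i,m] = sum_n D[i,m,n] * W[o,i,n].
        w = torch.einsum('oin,imn->oim', self.W, self.D)
        w = w.view(w.size(0), self.in_ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)
```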
arXiv Detail & Related papers (2020-06-22T06:57:10Z)