KernelWarehouse: Towards Parameter-Efficient Dynamic Convolution
- URL: http://arxiv.org/abs/2308.08361v1
- Date: Wed, 16 Aug 2023 13:35:09 GMT
- Title: KernelWarehouse: Towards Parameter-Efficient Dynamic Convolution
- Authors: Chao Li, Anbang Yao
- Abstract summary: Dynamic convolution learns a linear mixture of $n$ static kernels weighted with their sample-dependent attentions.
Existing designs are parameter-inefficient: they increase the number of convolutional parameters by $n$ times.
We propose KernelWarehouse, which can strike a favorable trade-off between parameter efficiency and representation power.
- Score: 19.021411176761738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic convolution learns a linear mixture of $n$ static kernels weighted
with their sample-dependent attentions, demonstrating superior performance
compared to normal convolution. However, existing designs are
parameter-inefficient: they increase the number of convolutional parameters by
$n$ times. This, together with the optimization difficulty, has prevented the use of a
significantly large value of $n$ (e.g., $n>100$ instead of the typical setting
$n<10$) to push forward the performance boundary. In this paper, we propose
KernelWarehouse, a more
general form of dynamic convolution, which can strike a favorable trade-off
between parameter efficiency and representation power. Its key idea is to
redefine the basic concepts of "kernels" and "assembling kernels" in
dynamic convolution from the perspective of reducing kernel dimension and
increasing kernel number significantly. In principle, KernelWarehouse enhances
convolutional parameter dependencies within the same layer and across
successive layers via tactful kernel partition and warehouse sharing, yielding
a high degree of freedom to fit a desired parameter budget. We validate our
method on ImageNet and MS-COCO datasets with different ConvNet architectures,
and show that it attains state-of-the-art results. For instance, the
ResNet18|ResNet50|MobileNetV2|ConvNeXt-Tiny model trained with KernelWarehouse
on ImageNet reaches 76.05%|81.05%|75.52%|82.51% top-1 accuracy. Thanks to its
flexible design, KernelWarehouse can even reduce the model size of a ConvNet
while improving accuracy, e.g., our ResNet18 models with 36.45%|65.10%
parameter reduction relative to the baseline show 2.89%|2.29% absolute improvements in
top-1 accuracy.
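To make the above concrete, below is a minimal PyTorch sketch of the vanilla dynamic convolution the abstract describes: a CondConv/DY-Conv-style linear mixture of $n$ static kernels weighted by sample-dependent attentions. The module and attention design here are illustrative assumptions, not the paper's KernelWarehouse method; the sketch mainly makes the $n$-fold parameter growth explicit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Vanilla dynamic convolution: a linear mixture of n static kernels,
    weighted by sample-dependent attentions. Illustrative sketch only."""

    def __init__(self, in_ch, out_ch, k=3, n=8):
        super().__init__()
        self.n, self.k = n, k
        # n static kernels -> n times the parameters of a normal conv layer,
        # which is the parameter inefficiency the paper targets.
        self.kernels = nn.Parameter(0.02 * torch.randn(n, out_ch, in_ch, k, k))
        # Lightweight attention branch: global average pool + linear + softmax.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, n),
        )

    def forward(self, x):
        b = x.size(0)
        alpha = F.softmax(self.attn(x), dim=1)                     # (b, n) attentions
        w = torch.einsum("bn,noihw->boihw", alpha, self.kernels)   # per-sample mixed kernel
        # Fold the batch into the channel axis (grouped conv) so each sample
        # is convolved with its own assembled kernel.
        x = x.reshape(1, b * x.size(1), *x.shape[2:])
        w = w.reshape(b * w.size(1), *w.shape[2:])
        out = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return out.reshape(b, -1, *out.shape[2:])

# Example: with n=8, the layer holds roughly 8x the weights of a static 3x3 conv.
layer = DynamicConv2d(64, 128, k=3, n=8)
y = layer(torch.randn(2, 64, 32, 32))  # -> (2, 128, 32, 32)
```

KernelWarehouse, by contrast, reduces the dimension of each kernel while greatly increasing the kernel number: kernels are partitioned into smaller cells and assembled from warehouses shared within a layer and across successive layers, which is what lets $n$ grow far beyond the typical $n<10$ within a fixed parameter budget.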
Related papers
- KernelWarehouse: Rethinking the Design of Dynamic Convolution [16.101179962553385]
KernelWarehouse redefines the basic concepts of "kernels", "assembling kernels" and "attention function".
We verify the effectiveness of KernelWarehouse on the ImageNet and MS-COCO datasets using various ConvNet architectures.
arXiv Detail & Related papers (2024-06-12T05:16:26Z) - PeLK: Parameter-efficient Large Kernel ConvNets with Peripheral Convolution [35.1473732030645]
Inspired by human vision, we propose a human-like peripheral convolution that efficiently reduces the parameter count of dense grid convolution by over 90%.
Our peripheral convolution behaves much like human peripheral vision, reducing the complexity of convolution from O(K^2) to O(log K) without degrading performance.
For the first time, we successfully scale up the kernel size of CNNs to an unprecedented 101x101 and demonstrate consistent improvements.
arXiv Detail & Related papers (2024-03-12T12:19:05Z) - Fully $1\times1$ Convolutional Network for Lightweight Image
Super-Resolution [79.04007257606862]
Deep models have made significant progress on single image super-resolution (SISR) tasks, in particular large models with large kernels ($3\times3$ or more).
$1\times1$ convolutions bring substantial computational efficiency, but struggle with aggregating local spatial representations.
We propose a simple yet effective fully $1\times1$ convolutional network, named Shift-Conv-based Network (SCNet); a rough sketch of the shift-plus-$1\times1$ idea appears after this list.
arXiv Detail & Related papers (2023-07-30T06:24:03Z) - Scaling Up 3D Kernels with Bayesian Frequency Re-parameterization for
Medical Image Segmentation [25.62587471067468]
RepUX-Net is a pure CNN architecture with a simple large kernel block design.
Inspired by the spatial frequency in the human visual system, we extend the design to vary the kernel convergence in an element-wise setting.
arXiv Detail & Related papers (2023-03-10T08:38:34Z) - PAD-Net: An Efficient Framework for Dynamic Networks [72.85480289152719]
Common practice in implementing dynamic networks is to convert the given static layers into fully dynamic ones.
We propose a partially dynamic network, namely PAD-Net, to transform the redundant dynamic parameters into static ones.
Our method is comprehensively supported by large-scale experiments with two typical advanced dynamic architectures.
arXiv Detail & Related papers (2022-11-10T12:42:43Z) - Efficient CNN Architecture Design Guided by Visualization [13.074652653088584]
VGNetG-1.0MP achieves 67.7% top-1 accuracy with 0.99M parameters and 69.2% top-1 accuracy with 1.14M parameters on the ImageNet classification dataset.
Our VGNetF-1.5MP achieves 64.4%(-3.2%) top-1 accuracy and 66.2%(-1.4%) top-1 accuracy with additional Gaussian kernels.
arXiv Detail & Related papers (2022-07-21T06:22:15Z) - Fast and High-Quality Image Denoising via Malleable Convolutions [72.18723834537494]
We present Malleable Convolution (MalleConv), as an efficient variant of dynamic convolution.
Unlike previous works, MalleConv generates a much smaller set of spatially-varying kernels from the input.
We also build an efficient denoising network using MalleConv, named MalleNet.
arXiv Detail & Related papers (2022-01-02T18:35:20Z) - Content-Aware Convolutional Neural Networks [98.97634685964819]
Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers.
We propose a Content-aware Convolution (CAC) that automatically detects smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel.
arXiv Detail & Related papers (2021-06-30T03:54:35Z) - DyNet: Dynamic Convolution for Accelerating Convolutional Neural
Networks [16.169176006544436]
We propose a novel dynamic convolution method to adaptively generate convolution kernels based on image contents.
Based on the architecture MobileNetV3-Small/Large, DyNet achieves 70.3/77.1% Top-1 accuracy on ImageNet with an improvement of 2.9/1.9%.
arXiv Detail & Related papers (2020-04-22T16:58:05Z) - Kernel Quantization for Efficient Network Compression [59.55192551370948]
Kernel Quantization (KQ) aims to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version without significant performance loss.
Inspired by the evolution from weight pruning to filter pruning, we propose to quantize in both kernel and weight level.
Experiments on the ImageNet classification task prove that KQ needs 1.05 and 1.62 bits on average in VGG and ResNet18, respectively, to represent each parameter in the convolution layer.
arXiv Detail & Related papers (2020-03-11T08:00:04Z) - XSepConv: Extremely Separated Convolution [60.90871656244126]
We propose a novel extremely separated convolutional block (XSepConv)
It fuses spatially separable convolutions into depthwise convolution to reduce both the computational cost and parameter size of large kernels.
XSepConv is designed to be an efficient alternative to vanilla depthwise convolution with large kernel sizes.
arXiv Detail & Related papers (2020-02-27T11:46:17Z)
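As noted in the SCNet entry above, spatial context can be recovered in fully $1\times1$ networks by shifting channel groups before the pointwise convolution. The sketch below is a rough illustration of that shift-plus-$1\times1$ idea under assumed choices (four shifted channel groups, torch.roll with wrap-around borders); it is not SCNet's exact design.

```python
import torch
import torch.nn as nn

class ShiftConv1x1(nn.Module):
    """Shift four channel groups by one pixel each, then mix with a 1x1 conv.
    The shift injects the local spatial context that plain 1x1 convs lack."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        g = x.size(1) // 4  # four shifted groups; any remaining channels stay put
        a, b, c, d, rest = torch.split(x, [g, g, g, g, x.size(1) - 4 * g], dim=1)
        x = torch.cat([
            torch.roll(a, shifts=1,  dims=2),   # shift down
            torch.roll(b, shifts=-1, dims=2),   # shift up
            torch.roll(c, shifts=1,  dims=3),   # shift right
            torch.roll(d, shifts=-1, dims=3),   # shift left
            rest,
        ], dim=1)
        return self.pw(x)

# Example: spatial mixing achieved with only pointwise (1x1) weights.
out = ShiftConv1x1(64, 64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```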