Padding-free Convolution based on Preservation of Differential
Characteristics of Kernels
- URL: http://arxiv.org/abs/2309.06370v1
- Date: Tue, 12 Sep 2023 16:36:12 GMT
- Title: Padding-free Convolution based on Preservation of Differential
Characteristics of Kernels
- Authors: Kuangdai Leng and Jeyan Thiyagalingam
- Abstract summary: We present a padding-free method for size-preserving convolution, built on the preservation of the differential characteristics of kernels.
The main idea is to make convolution over an incomplete sliding window "collapse" to a linear differential operator evaluated locally at its central pixel.
- Score: 1.3597551064547502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolution is a fundamental operation in image processing and machine
learning. Aimed primarily at maintaining image size, padding is a key
ingredient of convolution, which, however, can introduce undesirable boundary
effects. We present a padding-free method for size-preserving convolution,
built on the preservation of the differential characteristics of kernels. The main
idea is to make convolution over an incomplete sliding window "collapse" to a
linear differential operator evaluated locally at its central pixel, which no
longer requires information from the neighbouring missing pixels. While the
underlying theory is rigorous, our final formula turns out to be simple: the
convolution over an incomplete window is achieved by convolving its nearest
complete window with a transformed kernel. This formula is computationally
lightweight, involving neither interpolation nor extrapolation, and placing no restrictions
on image and kernel sizes. Our method favours data with smooth boundaries, such
as high-resolution images and fields from physics. Our experiments include: i)
filtering analytical and non-analytical fields from computational physics and,
ii) training convolutional neural networks (CNNs) for the tasks of image
classification, semantic segmentation and super-resolution reconstruction. In
all these experiments, our method consistently outperformed the compared
methods.
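The boundary problem the abstract describes, and the spirit of its fix, can be illustrated in one dimension. In the sketch below, a zero-padded central-difference kernel badly misestimates the derivative of a smooth field at the boundary, while a one-sided stencil applied to the nearest complete window recovers it. The one-sided stencil is a standard second-order finite difference from textbook numerics, used here only as an analogy to the paper's "transformed kernel over the nearest complete window"; it is not the authors' actual construction.

```python
import numpy as np

n = 64
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
f = np.sin(x)          # smooth field, as in the "fields from physics" setting
true_dfdx = np.cos(x)  # exact derivative for reference

# Zero-padded convolution with the central-difference kernel [-1, 0, 1]/(2h):
# the missing neighbour outside the boundary is silently treated as 0.
padded = np.concatenate(([0.0], f, [0.0]))
central = (padded[2:] - padded[:-2]) / (2.0 * h)

# Padding-free boundary value: a one-sided stencil over the nearest complete
# window [f[0], f[1], f[2]], chosen so that the first-derivative operator is
# preserved at the boundary pixel (second-order accurate, O(h^2) error).
one_sided_left = (-3.0 * f[0] + 4.0 * f[1] - f[2]) / (2.0 * h)

err_padded = abs(central[0] - true_dfdx[0])        # large: zero-padding bias
err_one_sided = abs(one_sided_left - true_dfdx[0])  # small: O(h^2)
print(err_padded > 100 * err_one_sided)
```

The padded estimate at the boundary is pulled toward zero because the fictitious padded value carries no information about the field, whereas the one-sided stencil uses only real samples and keeps the differential characteristic intact, which is the behaviour the paper generalises to arbitrary 2D kernels.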
Related papers
- Self-Supervised Single-Image Deconvolution with Siamese Neural Networks [6.138671548064356]
Inverse problems in image reconstruction are fundamentally complicated by unknown noise properties.
Deep learning methods allow for flexible parametrization of the noise and learning its properties directly from the data.
We tackle this problem with Fast Fourier Transform convolutions that provide training speed-up in 3D deconvolution tasks.
arXiv Detail & Related papers (2023-08-18T09:51:11Z)
- Progressive Random Convolutions for Single Domain Generalization [23.07924668615951]
Single domain generalization aims to train a generalizable model with only one source domain to perform well on arbitrary unseen target domains.
Image augmentation based on Random Convolutions (RandConv) enables the model to learn generalizable visual representations by distorting local textures.
We propose a Progressive Random Convolution (Pro-RandConv) method that stacks random convolution layers with a small kernel size instead of increasing the kernel size.
arXiv Detail & Related papers (2023-04-02T01:42:51Z)
- Fast and High-Quality Image Denoising via Malleable Convolutions [72.18723834537494]
We present Malleable Convolution (MalleConv), as an efficient variant of dynamic convolution.
Unlike previous works, MalleConv generates a much smaller set of spatially-varying kernels from input.
We also build an efficient denoising network using MalleConv, coined as MalleNet.
arXiv Detail & Related papers (2022-01-02T18:35:20Z)
- Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z)
- Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
arXiv Detail & Related papers (2021-11-16T09:00:44Z)
- Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution [130.32026819172256]
Existing blind image super-resolution (SR) methods mostly assume blur kernels are spatially invariant across the whole image.
This paper proposes a mutual affine network (MANet) for spatially variant kernel estimation.
arXiv Detail & Related papers (2021-08-11T16:11:17Z)
- Content-Aware Convolutional Neural Networks [98.97634685964819]
Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers.
We propose a Content-aware Convolution (CAC) that automatically detects the smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel.
arXiv Detail & Related papers (2021-06-30T03:54:35Z)
- X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operation.
The proposed X-volution achieves highly competitive visual understanding improvements.
arXiv Detail & Related papers (2021-06-04T04:32:02Z)
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
arXiv Detail & Related papers (2020-06-22T06:57:10Z)
- Adaptive Fractional Dilated Convolution Network for Image Aesthetics Assessment [33.945579916184364]
An adaptive fractional dilated convolution (AFDC) is developed to tackle this issue in convolutional kernel level.
We provide a concise formulation for mini-batch training and utilize a grouping strategy to reduce computational overhead.
Our experimental results demonstrate that our proposed method achieves state-of-the-art performance on image aesthetics assessment over the AVA dataset.
arXiv Detail & Related papers (2020-04-06T21:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.