Optimal Density Functions for Weighted Convolution in Learning Models
- URL: http://arxiv.org/abs/2505.24527v1
- Date: Fri, 30 May 2025 12:36:36 GMT
- Title: Optimal Density Functions for Weighted Convolution in Learning Models
- Authors: Simone Cammarasana, Giuseppe Patanè
- Abstract summary: The paper introduces the weighted convolution, a novel approach to convolution for signals defined on regular grids. The weighted convolution can be applied to convolutional neural network problems to improve the approximation accuracy. Future work will apply the weighted convolution to real-case 2D and 3D image convolutional learning problems.
- Score: 6.6942213231641805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper introduces the weighted convolution, a novel approach to the convolution for signals defined on regular grids (e.g., 2D images) through the application of an optimal density function to scale the contribution of neighbouring pixels based on their distance from the central pixel. This choice differs from the traditional uniform convolution, which treats all neighbouring pixels equally. Our weighted convolution can be applied to convolutional neural network problems to improve the approximation accuracy. Given a convolutional network, we define a framework to compute the optimal density function through a minimisation model. The framework separates the optimisation of the convolutional kernel weights (using stochastic gradient descent) from the optimisation of the density function (using DIRECT-L). Experimental results on a learning model for an image-to-image task (e.g., image denoising) show that the weighted convolution significantly reduces the loss (up to 53% improvement) and increases the test accuracy compared to standard convolution. While this method increases execution time by 11%, it is robust across several hyperparameters of the learning model. Future work will apply the weighted convolution to real-case 2D and 3D image convolutional learning problems.
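To make the operator concrete, below is a minimal NumPy sketch of a weighted convolution in the spirit of the abstract: each kernel tap is rescaled by a density value that depends on its distance from the central pixel. The Gaussian parameterisation, the function names, and the "valid" boundary handling are illustrative assumptions, not the authors' implementation; in the paper the density is found by DIRECT-L while the kernel weights are trained by stochastic gradient descent.

```python
import numpy as np

def radial_density(kernel_size, sigma=1.0):
    """Candidate density over kernel offsets: a normalised Gaussian of each
    tap's distance from the central pixel. The paper optimises the density
    with DIRECT-L (e.g. scipy.optimize.direct); this single-parameter
    Gaussian is only a stand-in parameterisation."""
    c = kernel_size // 2
    ys, xs = np.mgrid[0:kernel_size, 0:kernel_size]
    dist2 = (ys - c) ** 2 + (xs - c) ** 2
    w = np.exp(-dist2 / (2.0 * sigma ** 2))
    return w / w.sum()

def weighted_conv2d(image, kernel, density):
    """'Valid' 2D convolution in which each kernel weight is scaled by the
    density value at its offset, instead of the uniform weighting of
    standard convolution. A constant density recovers the standard case."""
    k = kernel.shape[0]
    h, w = image.shape
    scaled = kernel * density  # distance-dependent rescaling of the taps
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * scaled)
    return out

# Toy usage on a denoising-style smoothing kernel.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kernel = np.ones((3, 3)) / 9.0          # SGD would learn these weights
density = radial_density(3, sigma=0.8)  # DIRECT-L would tune sigma
print(weighted_conv2d(img, kernel, density).shape)  # (6, 6)
```

Setting the density to a constant recovers the standard uniform convolution, which is what makes the comparison reported in the abstract well defined.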
Related papers
- Optimal Weighted Convolution for Classification and Denoising [6.6942213231641805]
We introduce a novel weighted convolution operator that enhances traditional convolutional neural networks (CNNs). This extension enables the network to differentially weight neighbouring pixels based on their relative position to the reference pixel. Although developed for 2D image data, the framework is generalisable to signals on regular grids of arbitrary dimensions.
arXiv Detail & Related papers (2025-05-30T13:10:46Z) - Geometric Algebra Planes: Convex Implicit Neural Volumes [70.12234371845445]
We show that GA-Planes is equivalent to a sparse low-rank factor plus a low-resolution matrix.
We also show that GA-Planes can be adapted for many existing representations.
arXiv Detail & Related papers (2024-11-20T18:21:58Z) - Spatially Optimized Compact Deep Metric Learning Model for Similarity Search [1.0015171648915433]
Similarity search is a crucial task in which spatial features determine the output.
This study demonstrates that using a single involution layer as a feature extractor alongside a compact convolution model significantly enhances similarity-search performance.
arXiv Detail & Related papers (2024-04-09T19:49:01Z) - LDConv: Linear deformable convolution for improving convolutional neural networks [18.814748446649627]
Linear Deformable Convolution (LDConv) is a plug-and-play convolutional operation that can replace the convolutional operation to improve network performance.
LDConv reduces the parameter growth of standard convolution and Deformable Conv from quadratic to linear: a standard k x k kernel needs k^2 weights, whereas LDConv's parameter count scales linearly with the number of sampled points.
arXiv Detail & Related papers (2023-11-20T07:54:54Z) - Self-Supervised Single-Image Deconvolution with Siamese Neural Networks [6.138671548064356]
Inverse problems in image reconstruction are fundamentally complicated by unknown noise properties.
Deep learning methods allow for flexible parametrization of the noise and learning its properties directly from the data.
We tackle this problem with Fast Fourier Transform convolutions that provide training speed-up in 3D deconvolution tasks.
arXiv Detail & Related papers (2023-08-18T09:51:11Z) - IKOL: Inverse kinematics optimization layer for 3D human pose and shape estimation via Gauss-Newton differentiation [44.00115413716392]
This paper presents an inverse kinematics optimization layer (IKOL) for 3D human pose and shape estimation.
IKOL has a much lower computational overhead than most existing regression-based methods.
It also yields more accurate 3D human pose estimates.
arXiv Detail & Related papers (2023-02-02T12:43:29Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Content-Aware Convolutional Neural Networks [98.97634685964819]
Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers.
We propose Content-aware Convolution (CAC), which automatically detects smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel.
arXiv Detail & Related papers (2021-06-30T03:54:35Z) - Displacement-Invariant Cost Computation for Efficient Stereo Matching [122.94051630000934]
Deep learning methods have dominated stereo matching leaderboards by yielding unprecedented disparity accuracy.
But their inference time is typically slow, on the order of seconds for a pair of 540p images.
We propose a displacement-invariant cost module to compute the matching costs without needing a 4D feature volume.
arXiv Detail & Related papers (2020-12-01T23:58:16Z) - Human Body Model Fitting by Learned Gradient Descent [48.79414884222403]
We propose a novel algorithm for fitting 3D human shape to images.
We show that this algorithm is fast (120 ms average convergence), robust across datasets, and achieves state-of-the-art results on public evaluation datasets.
arXiv Detail & Related papers (2020-08-19T14:26:47Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image (a minimal sketch follows this list).
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
arXiv Detail & Related papers (2020-06-22T17:59:07Z)
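As noted in the LMConv entry above, here is a hedged NumPy sketch of the locally masked convolution idea: a per-location binary mask is applied to the kernel weights before each dot product, so the effective receptive field can differ at every pixel. The function name, mask layout, and raster-scan example mask are assumptions for illustration, not the paper's API.

```python
import numpy as np

def locally_masked_conv2d(image, kernel, masks):
    """'Valid' 2D convolution where a per-location binary mask is applied
    to the kernel weights before each dot product, so the receptive field
    can differ at every pixel. masks has shape (out_h, out_w, k, k)."""
    k = kernel.shape[0]
    h, w = image.shape
    out_h, out_w = h - k + 1, w - k + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel * masks[i, j])
    return out

# Example: a raster-scan autoregressive mask that hides the centre pixel
# and everything after it, replicated at every location.
k = 3
ar_mask = np.zeros((k, k))
ar_mask.flat[: (k * k) // 2] = 1.0   # keep taps strictly before the centre
img = np.random.default_rng(1).standard_normal((8, 8))
masks = np.broadcast_to(ar_mask, (6, 6, k, k))
kernel = np.ones((k, k)) / (k * k)
print(locally_masked_conv2d(img, kernel, masks).shape)  # (6, 6)
```

Varying the mask per location is what lets an autoregressive model share one set of convolution weights across different generation orders, as the entry describes.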