Efficient Spatially-Variant Convolution via Differentiable Sparse Kernel Complex
- URL: http://arxiv.org/abs/2512.04556v1
- Date: Thu, 04 Dec 2025 08:20:07 GMT
- Title: Efficient Spatially-Variant Convolution via Differentiable Sparse Kernel Complex
- Authors: Zhizhen Wu, Zhe Cao, Yuchi Huo,
- Abstract summary: Image convolution with complex kernels is a fundamental operation in photography, scientific imaging, and animation effects, yet direct dense convolution is computationally prohibitive on resource-limited devices. Our approach provides a practical solution for mobile imaging and real-time rendering.
- Score: 15.919010246189709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image convolution with complex kernels is a fundamental operation in photography, scientific imaging, and animation effects, yet direct dense convolution is computationally prohibitive on resource-limited devices. Existing approximations, such as simulated annealing or low-rank decompositions, either lack efficiency or fail to capture non-convex kernels. We introduce a differentiable kernel decomposition framework that represents a target spatially-variant, dense, complex kernel using a set of sparse kernel samples. Our approach features (i) a decomposition that enables differentiable optimization of sparse kernels, (ii) a dedicated initialization strategy for non-convex shapes to avoid poor local minima, and (iii) a kernel-space interpolation scheme that extends single-kernel filtering to spatially varying filtering without retraining and additional runtime overhead. Experiments on Gaussian and non-convex kernels show that our method achieves higher fidelity than simulated annealing and significantly lower cost than low-rank decompositions. Our approach provides a practical solution for mobile imaging and real-time rendering, while remaining fully differentiable for integration into broader learning pipelines.
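The core decomposition idea can be sketched as follows: approximate a dense target kernel with a small set of sparse taps whose weights are optimized by gradient descent on the L2 reconstruction error. This is an illustrative stand-in, not the paper's implementation: the fixed tap grid, the small Gaussian bump around each tap, and all parameter values below are assumptions made for the sketch (the paper additionally optimizes tap positions, handles non-convex shapes via a dedicated initialization, and interpolates in kernel space for spatially varying filtering).

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized dense 2D Gaussian kernel (the target to compress)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

SIZE = 15
target = gaussian_kernel(SIZE, 2.5)          # 225 dense weights

# Sparse support: 25 taps on a fixed coarse grid (the paper optimizes
# tap positions too; fixing them keeps this sketch short).
positions = [(i, j) for i in range(1, SIZE, 3) for j in range(1, SIZE, 3)]

def tap_basis(pos, s=1.2):
    """Small normalized Gaussian bump around one tap, so that a few taps
    can cover the dense support; the model stays linear (hence
    differentiable) in the tap weights."""
    ax = np.arange(SIZE)
    xx, yy = np.meshgrid(ax, ax)
    b = np.exp(-((xx - pos[1])**2 + (yy - pos[0])**2) / (2 * s**2))
    return b / b.sum()

B = np.stack([tap_basis(p).ravel() for p in positions], axis=1)  # (225, 25)
w = np.zeros(len(positions))                 # tap weights to optimize
t = target.ravel()

# Gradient descent on the convex loss 0.5 * ||B w - t||^2.
lr = 1.0
for _ in range(5000):
    residual = B @ w - t
    w -= lr * (B.T @ residual)

recon = (B @ w).reshape(SIZE, SIZE)
err = np.abs(recon - target).max()
print(f"{len(positions)} taps approximate {SIZE * SIZE} dense weights, "
      f"max abs error {err:.2e} (target peak {target.max():.2e})")
```

Filtering with the 25-tap approximation then costs 25 multiply-adds per pixel plus one shared small blur, instead of 225 multiply-adds for the dense kernel; the abstract's kernel-space interpolation would additionally blend per-tap weights across the image to make the filter spatially varying without retraining.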
Related papers
- KROM: Kernelized Reduced Order Modeling [3.988493458010939]
KROM formulates the solution of a PDE as a minimum-norm (Gaussian-process) recovery problem in an RKHS.
A central ingredient is an empirical kernel constructed from a snapshot library of PDE solutions.
arXiv Detail & Related papers (2026-02-27T22:52:22Z)
- Parallel Diffusion Solver via Residual Dirichlet Policy Optimization [88.7827307535107]
Diffusion models (DMs) have achieved state-of-the-art generative performance but suffer from high sampling latency due to their sequential denoising nature.
Existing solver-based acceleration methods often face significant image-quality degradation under a low sampling-step budget.
We propose the Ensemble Parallel Direction solver (dubbed EPD-EPr), a novel ODE solver that mitigates these errors by incorporating multiple parallel gradient evaluations in each step.
arXiv Detail & Related papers (2025-12-28T05:48:55Z)
- Fast Isotropic Median Filtering [0.0]
Median filtering is a cornerstone of computational image processing.
Our method operates efficiently on arbitrary bit-depth data, arbitrary kernel sizes, and arbitrary convex kernel shapes.
arXiv Detail & Related papers (2025-05-28T23:38:21Z)
- Padding-free Convolution based on Preservation of Differential Characteristics of Kernels [1.3597551064547502]
We present a non-padding-based method for size-keeping convolution based on the preservation of differential characteristics of kernels.
The main idea is to make convolution over an incomplete sliding window "collapse" to a linear differential operator evaluated locally at its central pixel.
arXiv Detail & Related papers (2023-09-12T16:36:12Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised by Tweedie's formula forms a Krylov subspace, then CG initialized with the denoised data keeps the data-consistency update within that tangent space.
Our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method.
arXiv Detail & Related papers (2023-03-10T07:42:49Z)
- Self-supervised learning with rotation-invariant kernels [4.059849656394191]
We propose a general kernel framework to design a generic regularization loss that promotes the embedding distribution to be close to the uniform distribution on the hypersphere.
Our framework uses rotation-invariant kernels defined on the hypersphere, also known as dot-product kernels.
Our experiments demonstrate that using a truncated rotation-invariant kernel provides competitive results compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-07-28T08:06:24Z)
- Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
arXiv Detail & Related papers (2021-11-16T09:00:44Z)
- Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution [130.32026819172256]
Existing blind image super-resolution (SR) methods mostly assume blur kernels are spatially invariant across the whole image.
This paper proposes a mutual affine network (MANet) for spatially variant kernel estimation.
arXiv Detail & Related papers (2021-08-11T16:11:17Z)
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.