Flow-based Kernel Prior with Application to Blind Super-Resolution
- URL: http://arxiv.org/abs/2103.15977v1
- Date: Mon, 29 Mar 2021 22:37:06 GMT
- Title: Flow-based Kernel Prior with Application to Blind Super-Resolution
- Authors: Jingyun Liang, Kai Zhang, Shuhang Gu, Luc Van Gool, Radu Timofte
- Abstract summary: Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
- Score: 143.21527713002354
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Kernel estimation is generally one of the key problems for blind image
super-resolution (SR). Recently, Double-DIP proposes to model the kernel via a
network architecture prior, while KernelGAN employs a deep linear network and
several regularization losses to constrain the kernel space. However, they fail
to fully exploit the general SR kernel assumption that anisotropic Gaussian
kernels are sufficient for image SR. To address this issue, this paper proposes
a normalizing flow-based kernel prior (FKP) for kernel modeling. By learning an
invertible mapping between the anisotropic Gaussian kernel distribution and a
tractable latent distribution, FKP can be easily used to replace the kernel
modeling modules of Double-DIP and KernelGAN. Specifically, FKP optimizes the
kernel in the latent space rather than the network parameter space, which
allows it to generate reasonable kernel initialization, traverse the learned
kernel manifold and improve the optimization stability. Extensive experiments
on synthetic and real-world images demonstrate that the proposed FKP can
significantly improve the kernel estimation accuracy with fewer parameters and less
runtime and memory usage, leading to state-of-the-art blind SR results.
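
To make the abstract's two key ideas concrete, the sketch below is a minimal, self-contained illustration rather than the authors' released code: it first builds the anisotropic Gaussian kernels that the flow-based prior is assumed to model, and then estimates a kernel by optimizing the flow's latent code under the standard blur-and-downsample degradation model. The `flow` object (with a `latent_dim` attribute and an `inverse(z)` method returning a kernel), `lr_image`, and `hr_estimate` are hypothetical placeholders.

```python
# Minimal sketch under assumed interfaces, not the official FKP implementation.
import math
import torch
import torch.nn.functional as F


def anisotropic_gaussian_kernel(sigma_x, sigma_y, theta, size=21):
    """Anisotropic Gaussian kernel with covariance R(theta) diag(sx^2, sy^2) R(theta)^T."""
    R = torch.tensor([[math.cos(theta), -math.sin(theta)],
                      [math.sin(theta),  math.cos(theta)]])
    cov = R @ torch.diag(torch.tensor([sigma_x ** 2, sigma_y ** 2])) @ R.T
    inv_cov = torch.linalg.inv(cov)
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    coords = torch.stack([xx, yy], dim=-1)                        # (size, size, 2)
    # k(v) is proportional to exp(-0.5 * v^T Sigma^{-1} v), normalized to sum to 1
    expo = torch.einsum("hwi,ij,hwj->hw", coords, inv_cov, coords)
    kernel = torch.exp(-0.5 * expo)
    return kernel / kernel.sum()


def estimate_kernel_in_latent_space(flow, lr_image, hr_estimate, scale=4,
                                    steps=200, step_size=0.1):
    """Optimize the latent code z (not network weights) so that blurring and
    downsampling the current HR estimate reproduces the observed LR image,
    i.e. the usual degradation model y = (x * k) downsampled by `scale`.
    `flow.inverse(z)` is assumed to return a (size, size) kernel; both images
    are assumed to be single-channel tensors of shape (1, 1, H, W)."""
    z = torch.zeros(flow.latent_dim, requires_grad=True)          # init at the latent mean
    optimizer = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        kernel = flow.inverse(z)                                   # stays on the learned kernel manifold
        weight = kernel.reshape(1, 1, *kernel.shape[-2:])
        blurred = F.conv2d(hr_estimate, weight, padding=weight.shape[-1] // 2)
        loss = F.mse_loss(blurred[..., ::scale, ::scale], lr_image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return flow.inverse(z).detach()
```

Because the optimization variable is a low-dimensional latent code rather than the weights of a kernel-generating network, every iterate decodes to a plausible anisotropic Gaussian kernel, which is the initialization and stability argument made in the abstract.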
Related papers
- An Exact Kernel Equivalence for Finite Classification Models [1.4777718769290527]
We compare our exact representation to the well-known Neural Tangent Kernel (NTK) and discuss approximation error relative to the NTK.
We use this exact kernel to show that our theoretical contribution can provide useful insights into the predictions made by neural networks.
arXiv Detail & Related papers (2023-08-01T20:22:53Z) - Self-supervised learning with rotation-invariant kernels [4.059849656394191]
We propose a general kernel framework to design a generic regularization loss that promotes the embedding distribution to be close to the uniform distribution on the hypersphere.
Our framework uses rotation-invariant kernels defined on the hypersphere, also known as dot-product kernels.
Our experiments demonstrate that using a truncated rotation-invariant kernel provides competitive results compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-07-28T08:06:24Z) - Deep Constrained Least Squares for Blind Image Super-Resolution [36.71106982590893]
We tackle the problem of blind image super-resolution (SR) with a reformulated degradation model and two novel modules.
To be more specific, we first reformulate the degradation model such that the deblurring kernel estimation can be transferred into the low resolution space.
Our experiments demonstrate that the proposed method achieves better accuracy and visual improvements against state-of-the-art methods.
arXiv Detail & Related papers (2022-02-15T15:32:11Z) - Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
arXiv Detail & Related papers (2021-11-16T09:00:44Z) - Deep Kernel Representation for Image Reconstruction in PET [9.041102353158065]
A deep kernel method is proposed by exploiting deep neural networks to enable an automated learning of an optimized kernel model.
The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform existing kernel and neural network methods for dynamic PET image reconstruction.
arXiv Detail & Related papers (2021-10-04T03:53:33Z) - Mutual Affine Network for Spatially Variant Kernel Estimation in Blind
Image Super-Resolution [130.32026819172256]
Existing blind image super-resolution (SR) methods mostly assume blur kernels are spatially invariant across the whole image.
This paper proposes a mutual affine network (MANet) for spatially variant kernel estimation.
arXiv Detail & Related papers (2021-08-11T16:11:17Z) - Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - Optimal Rates for Averaged Stochastic Gradient Descent under Neural
Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
arXiv Detail & Related papers (2020-06-22T14:31:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.