Blind Image Deconvolution by Generative-based Kernel Prior and Initializer via Latent Encoding
- URL: http://arxiv.org/abs/2407.14816v1
- Date: Sat, 20 Jul 2024 09:23:56 GMT
- Title: Blind Image Deconvolution by Generative-based Kernel Prior and Initializer via Latent Encoding
- Authors: Jiangtao Zhang, Zongsheng Yue, Hui Wang, Qian Zhao, Deyu Meng
- Abstract summary: Blind image deconvolution (BID) is a classic yet challenging problem in the field of image processing.
Recent advances in deep image prior (DIP) have motivated a series of DIP-based approaches that demonstrate remarkable success in BID.
We propose a new framework for BID that better considers the prior modeling and the initialization for blur kernels, leveraging a deep generative model.
- Score: 46.40894748268764
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Blind image deconvolution (BID) is a classic yet challenging problem in the field of image processing. Recent advances in deep image prior (DIP) have motivated a series of DIP-based approaches, demonstrating remarkable success in BID. However, due to the high non-convexity of the inherent optimization process, these methods are notorious for their sensitivity to the initialized kernel. To alleviate this issue and further improve their performance, we propose a new framework for BID that better considers the prior modeling and the initialization for blur kernels, leveraging a deep generative model. The proposed approach pre-trains a generative adversarial network-based kernel generator that aptly characterizes the kernel priors and a kernel initializer that facilitates a well-informed initialization for the blur kernel through latent space encoding. With the pre-trained kernel generator and initializer, one can obtain a high-quality initialization of the blur kernel, and enable optimization within a compact latent kernel manifold. Such a framework results in an evident performance improvement over existing DIP-based BID methods. Extensive experiments on different datasets demonstrate the effectiveness of the proposed method.
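The core mechanism described above, encoding into a latent space for initialization, then optimizing the latent code within a compact kernel manifold, can be sketched on a toy 1D problem. Everything below is a hypothetical stand-in, not the paper's implementation: the "generator" is a fixed random linear map with a softmax (rather than a pre-trained GAN), the initializer is replaced by a random latent code, and gradients are taken by finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained kernel generator: a fixed linear map
# followed by softmax, so any latent code z yields a valid
# (non-negative, sum-to-one) 5-tap 1D blur kernel.
W = rng.normal(size=(5, 4))

def generator(z):
    k = np.exp(W @ z)
    return k / k.sum()

def blur(x, k):
    return np.convolve(x, k, mode="same")

# Synthesize a blurry observation from a ground-truth latent code.
z_true = rng.normal(size=4)
x = rng.normal(size=64)            # "sharp" 1D signal
y = blur(x, generator(z_true))     # observed blurry signal

def loss(z):
    return np.sum((blur(x, generator(z)) - y) ** 2)

def num_grad(f, z, eps=1e-5):
    # Finite-difference gradient; a real implementation would backprop.
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2 * eps)
    return g

# The paper's initializer would encode y into a well-informed starting z;
# here we simply start from a random latent code.
z = rng.normal(size=4)
initial_loss = loss(z)
for _ in range(300):
    g = num_grad(loss, z)
    z -= 0.05 * g / (np.linalg.norm(g) + 1e-12)   # normalized descent step

print(initial_loss, loss(z))  # loss before and after latent optimization
```

Note the design point the sketch illustrates: because every iterate stays in the generator's range, the estimated kernel is always a plausible kernel by construction, which is what makes optimizing in the latent manifold better conditioned than optimizing kernel pixels directly.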
Related papers
- Blind Super-Resolution via Meta-learning and Markov Chain Monte Carlo Simulation [46.5310645609264]
We propose a Meta-learning and Markov Chain Monte Carlo based SISR approach to learn kernel priors from organized randomness.
A lightweight network is adopted as the kernel generator and is optimized via learning from the MCMC simulation on random Gaussian distributions.
A meta-learning-based alternating optimization procedure is proposed to optimize the kernel generator and image restorer.
arXiv Detail & Related papers (2024-06-13T07:50:15Z) - Meta-Learned Kernel For Blind Super-Resolution Kernel Estimation [22.437479940607332]
We introduce a learning-to-learn approach that meta-learns from the information contained in a distribution of images.
We show that our method leads to faster inference, with a speedup of 14.24x to 102.1x over existing methods.
arXiv Detail & Related papers (2022-12-15T15:11:38Z) - Uncertainty-Aware Unsupervised Image Deblurring with Deep Residual Prior [23.417096880297702]
Non-blind deblurring methods achieve decent performance under the accurate blur kernel assumption.
Hand-crafted prior, incorporating domain knowledge, generally performs well but may lead to poor performance when kernel (or induced) error is complex.
Data-driven prior, which excessively depends on the diversity and abundance of training data, is vulnerable to out-of-distribution blurs and images.
We propose an unsupervised semi-blind deblurring model which recovers the latent image from the blurry image and inaccurate blur kernel.
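The semi-blind setting above, recovering the latent image given the blurry image and an inaccurate kernel, builds on a classical non-blind data term. As a self-contained baseline (not the cited paper's method), a minimal Wiener deconvolution under a circular-convolution blur model looks like this; the `nsr` regularizer is the hand-tuned knob that a learned residual prior would replace:

```python
import numpy as np

def wiener_deconv(y, k, nsr=1e-2):
    # Classical non-blind Wiener deconvolution in the Fourier domain:
    #   x_hat = conj(K) * Y / (|K|^2 + NSR)
    # NSR (noise-to-signal ratio) regularizes frequencies where K ~ 0.
    K = np.fft.fft(k, n=y.size)
    X_hat = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft(X_hat))

rng = np.random.default_rng(1)
x = rng.normal(size=128)     # sharp 1D signal
k = np.ones(5) / 5.0         # box blur kernel
# Circular convolution via FFT, matching the deconvolution model above.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n=x.size)))
x_hat = wiener_deconv(y, k)  # reconstruction from the blurry signal
```

When the supplied kernel is inaccurate, this inverse filter amplifies the kernel error, which is precisely the failure mode the semi-blind residual-prior formulation is designed to absorb.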
arXiv Detail & Related papers (2022-10-09T11:10:59Z) - Unfolded Deep Kernel Estimation for Blind Image Super-resolution [23.798845090992728]
Blind image super-resolution (BISR) aims to reconstruct a high-resolution image from its low-resolution counterpart degraded by unknown blur kernel and noise.
We propose a novel unfolded deep kernel estimation (UDKE) method, which, for the first time to our best knowledge, explicitly solves the data term with high efficiency.
arXiv Detail & Related papers (2022-03-10T07:54:59Z) - Image-specific Convolutional Kernel Modulation for Single Image Super-resolution [85.09413241502209]
To address this issue, we propose a novel image-specific convolutional kernel modulation (IKM) scheme.
We exploit the global contextual information of image or feature to generate an attention weight for adaptively modulating the convolutional kernels.
Experiments on single image super-resolution show that the proposed methods achieve superior performances over state-of-the-art methods.
arXiv Detail & Related papers (2021-11-16T11:05:10Z) - Deep Kernel Representation for Image Reconstruction in PET [9.041102353158065]
A deep kernel method is proposed by exploiting deep neural networks to enable an automated learning of an optimized kernel model.
The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform existing kernel method and neural network method for dynamic PET image reconstruction.
arXiv Detail & Related papers (2021-10-04T03:53:33Z) - Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed hybrid plug-and-play based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with the non-convexity of the underlying optimization problem renders learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.