Blind Image Deconvolution Using Variational Deep Image Prior
- URL: http://arxiv.org/abs/2202.00179v3
- Date: Tue, 6 Jun 2023 02:16:47 GMT
- Title: Blind Image Deconvolution Using Variational Deep Image Prior
- Authors: Dong Huo, Abbas Masoumzadeh, Rafsanjany Kushol, Yee-Hong Yang
- Abstract summary: This paper proposes a new variational deep image prior (VDIP) for blind image deconvolution.
VDIP exploits additive hand-crafted image priors on latent sharp images and approximates a distribution for each pixel to avoid suboptimal solutions.
Experiments show that the generated images have better quality than those produced by the original DIP on benchmark datasets.
- Score: 4.92175281564179
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conventional deconvolution methods utilize hand-crafted image priors to
constrain the optimization. While deep-learning-based methods have simplified
the optimization by end-to-end training, they fail to generalize well to blurs
unseen in the training dataset. Thus, training image-specific models is
important for higher generalization. Deep image prior (DIP) provides an
approach to optimize the weights of a randomly initialized network with a
single degraded image by maximum a posteriori (MAP), which shows that the
architecture of a network can itself serve as a hand-crafted image prior. However,
unlike conventional hand-crafted image priors, which are derived statistically,
a proper network architecture is hard to find because the relationship between
images and their corresponding network architectures is unclear. As a result,
the network architecture cannot provide enough constraint
for the latent sharp image. This paper proposes a new variational deep image
prior (VDIP) for blind image deconvolution, which exploits additive
hand-crafted image priors on latent sharp images and approximates a
distribution for each pixel to avoid suboptimal solutions. Our mathematical
analysis shows that the proposed method can better constrain the optimization.
The experimental results further demonstrate that the generated images have
better quality than those of the original DIP on benchmark datasets. The source
code of our VDIP is available at
https://github.com/Dong-Huo/VDIP-Deconvolution.
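For context, the hand-crafted-prior optimization that the abstract contrasts with can be illustrated on a 1D toy problem: minimize a data-fidelity term plus a total-variation (TV) prior by gradient descent. This is an illustrative sketch only, not the paper's VDIP method; the blur kernel is assumed known here for brevity, and the signal, kernel, and hyper-parameters are invented for the example.

```python
import numpy as np

def conv(x, k):
    """Linear convolution cropped to the input length ('same' mode)."""
    return np.convolve(x, k, mode="same")

def deconv_tv(y, k, lam=0.02, lr=0.3, iters=2000, eps=1e-3):
    """MAP-style deconvolution: minimize 0.5*||k*x - y||^2 + lam*TV(x)
    with a smoothed total-variation prior, by plain gradient descent."""
    x = y.copy()
    k_adj = k[::-1]                      # adjoint of convolution = correlation
    for _ in range(iters):
        r = conv(x, k) - y               # data residual
        grad = conv(r, k_adj)            # gradient of the data-fidelity term
        d = np.diff(x)                   # forward differences of x
        w = d / np.sqrt(d * d + eps)     # derivative of smoothed |.| per difference
        g_tv = np.zeros_like(x)
        g_tv[:-1] -= w                   # adjoint of the difference operator
        g_tv[1:] += w
        x = x - lr * (grad + lam * g_tv)
    return x

# Toy problem: a box signal blurred by a small normalized kernel.
x_true = np.zeros(40)
x_true[15:25] = 1.0
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
y = conv(x_true, k)

x_hat = deconv_tv(y, k)
```

The TV prior plays the role the abstract assigns to hand-crafted priors: it constrains the otherwise ill-posed inversion toward piecewise-constant solutions, so the recovered signal is closer to the ground truth than the blurred observation.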
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting DNN inherent sparsity.
LIP significantly outperforms deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z) - VDIP-TGV: Blind Image Deconvolution via Variational Deep Image Prior
Empowered by Total Generalized Variation [21.291149526862416]
Deep image prior (DIP) proposes to use the deep network as a regularizer for a single image rather than as a supervised model.
In this paper, we combine total generalized variation (TGV) regularization with VDIP to overcome these shortcomings.
The proposed VDIP-TGV effectively recovers image edges and details by supplementing extra gradient information through TGV.
arXiv Detail & Related papers (2023-10-30T12:03:18Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Learning Iterative Neural Optimizers for Image Steganography [29.009110889917856]
In this paper, we argue that image steganography is inherently performed on the (elusive) manifold of natural images.
We train the neural network to stay close to the manifold of natural images throughout the optimization.
In comparison to previous state-of-the-art encoder-decoder-based steganography methods, it reduces the recovery error rate by multiple orders of magnitude.
arXiv Detail & Related papers (2023-03-27T19:17:07Z) - Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z) - Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work that explores normalizing flows as prior in image enhancement problems.
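The core mechanism behind a flow-based prior is the change-of-variables formula, which yields an exact log-density that can serve as the prior term of a MAP objective. The sketch below uses a single affine flow as a minimal stand-in (a real model would stack deep invertible layers); all names and parameters are illustrative, not from the paper.

```python
import numpy as np

def affine_flow_logp(x, mu, log_sigma):
    """log p(x) under a flow z = (x - mu) * exp(-log_sigma) mapping x to a
    standard normal base: log p(x) = log N(z; 0, I) + log|det dz/dx|."""
    z = (x - mu) * np.exp(-log_sigma)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()
    log_det = -log_sigma.sum()           # dz/dx is diagonal with entries exp(-log_sigma)
    return log_base + log_det

def map_objective(x, y, mu, log_sigma, noise_var=0.01):
    """MAP for a denoising toy: Gaussian data term minus the flow log-prior."""
    data = 0.5 * ((x - y) ** 2).sum() / noise_var
    return data - affine_flow_logp(x, mu, log_sigma)
```

For this affine flow the formula reduces exactly to a diagonal Gaussian log-density, which makes it easy to check; deeper flows keep the same interface while modeling far richer image distributions.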
arXiv Detail & Related papers (2020-09-09T21:40:11Z) - Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z) - A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
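To make that interpretation concrete: the simplest layer of this kind is the proximal operator of the $\ell_1$ norm (soft-thresholding), whose forward pass solves a small non-smooth convex problem in closed form. A minimal sketch, with the caveat that in an end-to-end trainable prior the threshold would be a learned parameter; the setup here is invented for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Forward pass that solves argmin_x 0.5*||x - v||^2 + t*||x||_1
    in closed form; in a trainable prior, t would be a learned parameter."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_objective(x, v, t):
    """The non-smooth convex objective the layer's forward pass minimizes."""
    return 0.5 * np.sum((x - v) ** 2) + t * np.sum(np.abs(x))
```

Because the forward pass is the argmin of a convex objective, a classical sparsity prior becomes a differentiable building block inside a deep model, which is the appeal described above.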
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.