End-to-end Interpretable Learning of Non-blind Image Deblurring
- URL: http://arxiv.org/abs/2007.01769v2
- Date: Tue, 15 Sep 2020 14:44:59 GMT
- Title: End-to-end Interpretable Learning of Non-blind Image Deblurring
- Authors: Thomas Eboli, Jian Sun, Jean Ponce
- Abstract summary: Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients.
We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels.
- Score: 102.75982704671029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-blind image deblurring is typically formulated as a linear least-squares
problem regularized by natural priors on the corresponding sharp picture's
gradients, which can be solved, for example, using a half-quadratic splitting
method with Richardson fixed-point iterations for its least-squares updates and
a proximal operator for the auxiliary variable updates. We propose to
precondition the Richardson solver using approximate inverse filters of the
(known) blur and natural image prior kernels. Using convolutions instead of a
generic linear preconditioner allows extremely efficient parameter sharing
across the image, and leads to significant gains in accuracy and/or speed
compared to classical FFT and conjugate-gradient methods. More importantly, the
proposed architecture is easily adapted to learning both the preconditioner and
the proximal operator using CNN embeddings. This yields a simple and efficient
algorithm for non-blind image deblurring which is fully interpretable, can be
learned end to end, and whose accuracy matches or exceeds the state of the art,
quite significantly, in the non-uniform case.
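As a rough illustration of the pipeline described in the abstract, the following is a minimal NumPy sketch of half-quadratic splitting with Richardson fixed-point least-squares updates and a soft-thresholding proximal step. The paper's learned convolutional preconditioner and CNN proximal operator are replaced here by hand-set surrogates (a plain Richardson step and an l1 shrinkage on finite-difference gradients), and all function names and parameter values are illustrative, not the authors'; circular FFT convolutions stand in for the paper's local filters.

```python
import numpy as np

def psf_fft(k, shape):
    # Zero-pad the kernel to the image size and center it at the origin.
    kp = np.zeros(shape)
    kh, kw = k.shape
    kp[:kh, :kw] = k
    return np.fft.fft2(np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1)))

def conv(x, k):
    # Circular 2-D convolution (stands in for the paper's local convolutions).
    return np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft(k, x.shape)))

def corr(x, k):
    # Adjoint of conv, i.e. multiplication by conj(k_hat) in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(psf_fft(k, x.shape))))

def hqs_deblur(y, k, lam=0.005, beta=0.1, n_outer=3, n_inner=30, step=0.5):
    """Half-quadratic splitting with Richardson fixed-point least-squares
    updates and soft-thresholding proximal steps on gradient auxiliaries."""
    priors = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]  # finite differences
    x = y.copy()
    for _ in range(n_outer):
        # z-update: proximal operator of (lam/beta)*|.|_1 on the filtered image.
        zs = [np.sign(conv(x, f)) * np.maximum(np.abs(conv(x, f)) - lam / beta, 0.0)
              for f in priors]
        # x-update: Richardson iterations on the normal equations
        # (K^T K + beta * sum_i F_i^T F_i) x = K^T y + beta * sum_i F_i^T z_i.
        rhs = corr(y, k) + beta * sum(corr(z, f) for z, f in zip(zs, priors))
        for _ in range(n_inner):
            Ax = corr(conv(x, k), k) + beta * sum(corr(conv(x, f), f) for f in priors)
            x = x + step * (rhs - Ax)  # a preconditioner would filter this residual
    return np.clip(x, 0.0, 1.0)
```

The paper's contribution is precisely to replace the plain residual step above with approximate-inverse convolutional preconditioners (and the shrinkage with a learned CNN proximal operator), which is what makes the scheme both fast and trainable end to end.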
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z) - Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
arXiv Detail & Related papers (2023-11-29T15:49:31Z) - Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring [48.80983873199214]
We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated as a maximum a posteriori (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
arXiv Detail & Related papers (2023-08-10T12:53:30Z) - Point spread function estimation for blind image deblurring problems
based on framelet transform [0.0]
Recovering the image content lost to the blurring process is an important problem in image processing.
Blind deblurring is computationally more demanding than the non-blind case because both the original image and the point spread function are unknown.
A coarse-to-fine iterative algorithm based on $\ell_0$-$\alpha\ell_1$ regularization and the framelet transform is introduced to estimate the point spread function.
The proposed method is evaluated on different kinds of images, such as text, face, and natural images.
arXiv Detail & Related papers (2021-12-21T06:15:37Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
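The building block of such unrolled networks is the classical (non-learned) ISTA iteration for an l1-regularized least-squares problem; a minimal sketch is below, where the learned per-layer thresholds and block-sparsity structure of the cited work are replaced by a single fixed regularization weight (names and values are illustrative).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*|.|_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=300):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Unfolding these iterations into network layers, with lam (and the step)
    learned per layer, yields the LISTA-style architectures the paper extends."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

With the step size 1/L, each iteration monotonically decreases the objective, which is the property that makes the unrolled network stable to train.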
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.