Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring
- URL: http://arxiv.org/abs/2308.05543v1
- Date: Thu, 10 Aug 2023 12:53:30 GMT
- Title: Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring
- Authors: Liang Chen, Jiawei Zhang, Zhenhua Li, Yunxuan Wei, Faming Fang, Jimmy
Ren, and Jinshan Pan
- Abstract summary: We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated as a maximum a posteriori (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
- Score: 48.80983873199214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Images taken under low-light conditions often contain blur and saturated
pixels at the same time. Deblurring images with saturated pixels is quite
challenging. Because of the limited dynamic range, the saturated pixels are
usually clipped in the imaging process and thus cannot be modeled by the linear
blur model. Previous methods use manually designed smooth functions to
approximate the clipping procedure. Their deblurring processes often require
empirically defined parameters, which may not be the optimal choices for
different images. In this paper, we develop a data-driven approach to model the
saturated pixels by a learned latent map. Based on the new model, the non-blind
deblurring task can be formulated as a maximum a posteriori (MAP) problem,
which can be effectively solved by iteratively computing the latent map and the
latent image. Specifically, the latent map is computed by learning from a map
estimation network (MEN), and the latent image estimation process is
implemented by a Richardson-Lucy (RL)-based updating scheme. To estimate
high-quality deblurred images without amplified artifacts, we develop a prior
estimation network (PEN) to obtain prior information, which is further
integrated into the RL scheme. Experimental results demonstrate that the
proposed method performs favorably against state-of-the-art algorithms both
quantitatively and qualitatively on synthetic and real-world images.
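As a rough illustration of the alternating scheme described in the abstract, the sketch below runs a classical Richardson-Lucy loop in which the learned components are reduced to hooks: `latent_map_fn` stands in for the map estimation network (MEN) and `prior_fn` for the prior estimation network (PEN). Both hooks, their placement, and the exact weighting by the saturation map are assumptions made for illustration; the abstract does not specify the update equations.

```python
# Minimal sketch of an RL-style update loop with hypothetical hooks for the
# learned latent (saturation) map and learned prior; not the authors' implementation.
import numpy as np
from scipy.signal import fftconvolve

def rl_deblur(blurred, kernel, n_iters=30, eps=1e-8,
              latent_map_fn=None, prior_fn=None):
    """Richardson-Lucy deconvolution with optional hooks for a learned
    latent (saturation) map and a learned prior correction."""
    x = np.clip(blurred, eps, None).astype(np.float64)   # latent image estimate
    k_flip = kernel[::-1, ::-1]                           # adjoint (flipped) blur kernel
    for _ in range(n_iters):
        # Hypothetical MEN hook: reweight the observation where pixels saturate.
        m = latent_map_fn(x) if latent_map_fn is not None else 1.0
        pred = fftconvolve(x, kernel, mode="same")        # forward blur K * x
        ratio = (m * blurred) / (m * pred + eps)          # data-fidelity ratio
        x = x * fftconvolve(ratio, k_flip, mode="same")   # multiplicative RL update
        # Hypothetical PEN hook: pull the estimate toward the learned prior.
        if prior_fn is not None:
            x = prior_fn(x)
        x = np.clip(x, eps, None)
    return x
```

With both hooks left as None, the loop reduces to plain Richardson-Lucy deconvolution; the contribution described in the abstract is precisely to replace those two hand-crafted pieces with learned networks.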
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Self-Supervised Multi-Scale Network for Blind Image Deblurring via Alternating Optimization [12.082424048578753]
We present a self-supervised multi-scale blind image deblurring method to jointly estimate the latent image and the blur kernel.
Thanks to the collaborative estimation across multiple scales, our method avoids the computationally intensive coarse-to-fine propagation and additional image deblurring processes.
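(A generic sketch of such alternating image/kernel updates is given after this list.)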
arXiv Detail & Related papers (2024-09-02T07:08:17Z)
- Multi-Feature Aggregation in Diffusion Models for Enhanced Face Super-Resolution [6.055006354743854]
We develop an algorithm that utilizes a low-resolution image combined with features extracted from multiple low-quality images to generate a super-resolved image.
Unlike other algorithms, our approach recovers facial features without explicitly providing attribute information.
This is the first time that multiple features combined with low-resolution images are used as conditioners to generate more reliable super-resolution images.
arXiv Detail & Related papers (2024-08-27T20:08:33Z)
- PixelPyramids: Exact Inference Models from Lossless Image Pyramids [58.949070311990916]
PixelPyramids is a block-autoregressive approach with scale-specific representations to encode the joint distribution of image pixels.
It yields state-of-the-art results for density estimation on various image datasets, especially for high-resolution data.
For CelebA-HQ 1024 x 1024, we observe that the density estimates improve to roughly 44% of the baseline, while sampling speeds remain superior even to easily parallelizable flow-based models.
arXiv Detail & Related papers (2021-10-17T10:47:29Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
- AcED: Accurate and Edge-consistent Monocular Depth Estimation [0.0]
Single image depth estimation is a challenging problem.
We formulate a fully differentiable ordinal regression and train the network in an end-to-end fashion.
A novel per-pixel confidence map computation for depth refinement is also proposed.
arXiv Detail & Related papers (2020-06-16T15:21:00Z)
- Deep Blind Video Super-resolution [85.79696784460887]
We propose a deep convolutional neural network (CNN) model to solve video SR by a blur kernel modeling approach.
The proposed CNN model consists of motion blur estimation, motion estimation, and latent image restoration modules.
We show that the proposed algorithm is able to generate clearer images with finer structural details.
arXiv Detail & Related papers (2020-03-10T13:43:24Z)
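For the self-supervised multi-scale blind deblurring entry above, the following is the promised generic sketch of alternating blind-deconvolution updates. It is not that paper's method (which learns the estimates with self-supervised networks across scales); it only illustrates the alternation of an image step (a few Richardson-Lucy iterations with the kernel fixed) and a kernel step (here an assumed Wiener-style least-squares estimate cropped to the kernel support).

```python
# Generic alternating image/kernel updates for blind deblurring (illustrative only).
# Assumes a grayscale float image, an odd kernel_size, and circular boundaries in the kernel step.
import numpy as np
from scipy.signal import fftconvolve

def alternating_blind_deblur(y, kernel_size=15, n_outer=10, n_rl=5, lam=1e-2, eps=1e-8):
    x = np.clip(y, eps, None).astype(np.float64)        # latent image, initialized with the blurry input
    k = np.zeros((kernel_size, kernel_size))
    k[kernel_size // 2, kernel_size // 2] = 1.0          # kernel initialized as a delta (no blur)
    for _ in range(n_outer):
        # Image step: a few Richardson-Lucy iterations with the current kernel fixed.
        for _ in range(n_rl):
            pred = fftconvolve(x, k, mode="same")
            x = x * fftconvolve(y / (pred + eps), k[::-1, ::-1], mode="same")
        # Kernel step: regularized least-squares estimate in the Fourier domain,
        # then center-crop to the kernel support, clip negatives, and renormalize.
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        k_full = np.fft.fftshift(np.fft.ifft2(Y * np.conj(X) / (np.abs(X) ** 2 + lam)).real)
        cy, cx = k_full.shape[0] // 2, k_full.shape[1] // 2
        h = kernel_size // 2
        k = np.clip(k_full[cy - h: cy + h + 1, cx - h: cx + h + 1], 0, None)
        k = k / (k.sum() + eps)
    return x, k
```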
This list is automatically generated from the titles and abstracts of the papers on this site.