LEDNet: Joint Low-light Enhancement and Deblurring in the Dark
- URL: http://arxiv.org/abs/2202.03373v1
- Date: Mon, 7 Feb 2022 17:44:05 GMT
- Title: LEDNet: Joint Low-light Enhancement and Deblurring in the Dark
- Authors: Shangchen Zhou, Chongyi Li, Chen Change Loy
- Abstract summary: We present the first large-scale dataset for joint low-light enhancement and deblurring.
LOL-Blur contains 12,000 low-blur/normal-sharp pairs with diverse darkness and motion blurs in different scenarios.
We also present an effective network, named LEDNet, to perform joint low-light enhancement and deblurring.
- Score: 100.24389251273611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Night photography typically suffers from both low light and blurring issues
due to the dim environment and the common use of long exposure. While existing
light enhancement and deblurring methods could deal with each problem
individually, a cascade of such methods cannot work harmoniously to cope well
with joint degradation of visibility and textures. Training an end-to-end
network is also infeasible as no paired data is available to characterize the
coexistence of low light and blurs. We address the problem by introducing a
novel data synthesis pipeline that models realistic low-light blurring
degradations. With the pipeline, we present the first large-scale dataset for
joint low-light enhancement and deblurring. The dataset, LOL-Blur, contains
12,000 low-blur/normal-sharp pairs with diverse darkness and motion blurs in
different scenarios. We further present an effective network, named LEDNet, to
perform joint low-light enhancement and deblurring. Our network is unique as it
is specially designed to consider the synergy between the two inter-connected
tasks. Both the proposed dataset and network provide a foundation for this
challenging joint task. Extensive experiments demonstrate the effectiveness of
our method on both synthetic and real-world datasets.
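To make the idea of such a synthesis pipeline concrete, here is a minimal, hedged sketch: it is not the LOL-Blur pipeline itself (which uses a more careful darkening model, frame-based blur, and calibrated noise), but it shows the basic recipe of darkening a sharp image, convolving it with a motion-blur kernel, and adding sensor noise. All parameter values below are illustrative assumptions.

```python
import numpy as np

def synthesize_low_light_blur(sharp, gamma=2.5, scale=0.3,
                              kernel_size=7, noise_sigma=0.01, rng=None):
    """Illustrative low-light + blur degradation (NOT the LOL-Blur pipeline):
    1) exposure reduction via a gamma curve and a brightness scale,
    2) a naive horizontal box kernel as linear motion blur,
    3) additive Gaussian read noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    dark = scale * np.power(sharp, gamma)            # darken the sharp image
    kernel = np.ones(kernel_size) / kernel_size      # normalized box blur
    blurred = np.apply_along_axis(                   # convolve each image row
        lambda row: np.convolve(row, kernel, mode="same"), 1, dark)
    noisy = blurred + rng.normal(0.0, noise_sigma, blurred.shape)
    return np.clip(noisy, 0.0, 1.0)

sharp = np.ones((8, 8))                 # toy all-white image in [0, 1]
low_blur = synthesize_low_light_blur(sharp)
print(low_blur.mean() < sharp.mean())   # degraded image is darker
```

Pairing each `sharp` image with its `low_blur` counterpart is what yields the supervised training pairs that an end-to-end network like LEDNet requires.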
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore details and visual information in corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Zero-Shot Enhancement of Low-light Image Based on Retinex Decomposition [4.175396687130961]
We propose a new learning-based Retinex decomposition of zero-shot low-light enhancement method, called ZERRINNet.
Our method is a zero-reference enhancement method that is not affected by the training data of paired and unpaired datasets.
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and hurt downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the key challenge: a large receptive field is needed to capture global contrast.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-blind Image Deblurring in Low-Light Conditions [32.35378513394865]
We propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE).
INFWIDE removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space.
Experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
arXiv Detail & Related papers (2022-07-17T15:22:31Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Progressive Joint Low-light Enhancement and Noise Removal for Raw Images [10.778200442212334]
Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture.
We propose a low-light image processing framework that performs joint illumination adjustment, color enhancement, and denoising.
Our framework does not require recollecting massive data when adapted to another camera model.
arXiv Detail & Related papers (2021-06-28T16:43:52Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
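Several of the entries above report gains in PSNR (e.g. the 0.95 dB improvement on LOL1000). For reference, this is the standard peak signal-to-noise ratio; a minimal implementation, assuming images normalized to a peak value of 1.0, is:

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB. Higher is better; a +1 dB gain
    corresponds to roughly 20% lower mean squared error."""
    mse = np.mean((reference - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
bad = np.full((4, 4), 0.1)       # uniform error of 0.1, so MSE = 0.01
print(psnr(ref, bad))            # 10 * log10(1 / 0.01) = 20 dB
```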
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.