ReLLIE: Deep Reinforcement Learning for Customized Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2107.05830v1
- Date: Tue, 13 Jul 2021 03:36:30 GMT
- Title: ReLLIE: Deep Reinforcement Learning for Customized Low-Light Image Enhancement
- Authors: Rongkai Zhang, Lanqing Guo, Siyu Huang and Bihan Wen
- Abstract summary: Low-light image enhancement (LLIE) is a pervasive yet challenging problem.
This paper presents a novel deep reinforcement learning based method, dubbed ReLLIE, for customized low-light enhancement.
- Score: 21.680891925479195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) is a pervasive yet challenging problem,
since: 1) low-light measurements may vary due to different imaging conditions
in practice; 2) images can be enlightened subjectively according to diverse
preferences by each individual. To tackle these two challenges, this paper
presents a novel deep reinforcement learning based method, dubbed ReLLIE, for
customized low-light enhancement. ReLLIE models LLIE as a Markov decision
process, i.e., estimating the pixel-wise, image-specific curves sequentially and
recurrently. Given a reward computed from a set of carefully crafted
non-reference loss functions, a lightweight network is proposed to estimate the
curves for enlightening a low-light input image. As ReLLIE learns a policy
instead of a one-to-one image translation, it can handle various low-light
measurements and provide customized enhanced outputs by flexibly applying the
policy a different number of times. Furthermore, ReLLIE can easily enhance
real-world images with hybrid corruptions, e.g., noise, by using a
plug-and-play denoiser. Extensive experiments on various benchmarks
demonstrate the advantages of ReLLIE compared to state-of-the-art methods.
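For intuition, the following is a minimal PyTorch sketch of the recurrent, pixel-wise curve adjustment the abstract describes: a small policy network predicts per-pixel curve parameters, a quadratic enhancement curve is applied for a configurable number of steps, and an optional plug-and-play denoiser can be run on intermediate results. The quadratic curve family, the toy convolutional policy, and all names here are illustrative assumptions, not ReLLIE's actual architecture or RL training loop.

```python
import torch
import torch.nn as nn

class CurvePolicy(nn.Module):
    """Toy stand-in for the lightweight policy: maps the current image to
    per-pixel curve parameters in [-1, 1] via a tanh output."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def enhance(img, policy, steps=8, denoiser=None):
    """Apply the learned curve recurrently; `steps` controls how strongly
    the image is enlightened, which is where customization comes from.
    `denoiser` is an optional plug-and-play callable for noisy inputs."""
    x = img
    for _ in range(steps):
        alpha = policy(x)              # per-pixel action for this step
        x = (x + alpha * x * (1.0 - x)).clamp(0.0, 1.0)  # quadratic curve
        if denoiser is not None:
            x = denoiser(x)            # suppress noise amplified by relighting
    return x

# More steps -> brighter output: the same policy yields customized results.
low = torch.rand(1, 3, 64, 64) * 0.2   # fake dark image in [0, 0.2]
out = enhance(low, CurvePolicy(), steps=6)
```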
Related papers
- DPEC: Dual-Path Error Compensation Method for Enhanced Low-Light Image Clarity [2.8161423494191222]
We propose the Dual-Path Error Compensation (DPEC) method to improve image quality under low-light conditions.
DPEC incorporates precise pixel-level error estimation to capture subtle differences and an independent denoising mechanism to prevent noise amplification.
Comprehensive quantitative and qualitative experimental results demonstrate that our algorithm significantly outperforms state-of-the-art methods in low-light image enhancement.
arXiv Detail & Related papers (2024-06-28T08:21:49Z)
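As a rough illustration of pixel-level error compensation combined with an independent denoising step, here is a generic residual-correction sketch in PyTorch; the module names and layout are hypothetical and not taken from DPEC.

```python
import torch
import torch.nn as nn

class ErrorHead(nn.Module):
    """Hypothetical error-estimation branch: predicts a per-pixel residual
    that compensates a coarse enhancement result."""
    def __init__(self, c=3, w=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * c, w, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(w, c, 3, padding=1),
        )

    def forward(self, low, coarse):
        # The error is estimated from both the input and the coarse result.
        return self.net(torch.cat([low, coarse], dim=1))

def error_compensated_enhance(low, base_enhancer, error_head, denoiser):
    coarse = base_enhancer(low)                  # brightness restoration
    refined = coarse + error_head(low, coarse)   # pixel-level compensation
    return denoiser(refined)                     # independent denoising step
```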
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
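"Quantized priors" typically means matching degraded features against a codebook learned from high-quality images; a minimal nearest-neighbor lookup might look as follows. This is an assumption-level sketch of the general technique, not CodeEnhance's actual design.

```python
import torch

def codebook_lookup(feats, codebook):
    """feats: (N, D) encoder features of a low-light image.
    codebook: (K, D) entries learned from well-lit images.
    Replaces each feature with its nearest codebook entry, injecting the
    high-quality prior before decoding."""
    dist = torch.cdist(feats, codebook)   # (N, K) pairwise L2 distances
    idx = dist.argmin(dim=1)              # nearest entry per feature
    return codebook[idx]

feats = torch.randn(10, 64)
codebook = torch.randn(512, 64)
quantized = codebook_lookup(feats, codebook)
```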
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
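A deep-unfolding loop of this kind can be sketched as alternating between estimating an illumination map from the intermediate result and producing a new enhanced estimate; the Retinex-style division and the network names below are illustrative assumptions, not DCUNet's exact formulation.

```python
import torch

def unfolded_enhance(low, illum_net, refine_net, iters=3, eps=1e-4):
    """Generic unfolding loop: each stage estimates an illumination map
    from the current result, divides it out Retinex-style, and refines
    the new estimate with a learned step."""
    x = low
    for _ in range(iters):
        illum = illum_net(x).clamp(min=eps)  # illumination from intermediate result
        x = refine_net(low / illum)          # new enhanced estimate
    return x.clamp(0.0, 1.0)
```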
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the challenge that obtaining global contrast requires a large receptive field.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
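One plausible form of a loss "based on relative information" penalizes differences in local contrast rather than absolute intensity, making it invariant to a global brightness offset; the sketch below is a generic construction of that idea, not FLW-Net's exact loss.

```python
import torch
import torch.nn.functional as F

def relative_contrast_loss(pred, ref):
    """Compare neighboring-pixel differences (local contrast) instead of
    absolute intensities; adding a constant brightness offset to `pred`
    leaves the loss unchanged."""
    def grads(x):
        dx = x[..., :, 1:] - x[..., :, :-1]  # horizontal differences
        dy = x[..., 1:, :] - x[..., :-1, :]  # vertical differences
        return dx, dy
    pdx, pdy = grads(pred)
    rdx, rdy = grads(ref)
    return F.l1_loss(pdx, rdx) + F.l1_loss(pdy, rdy)
```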
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network that enhances low-light images in the forward process and degrades normal-light ones in the inverse process, trained with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
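The key property such methods rely on is a network whose forward pass (enhance) has an exact inverse (degrade). A minimal invertible building block, the standard additive coupling layer, demonstrates this; it is a textbook construction, not the paper's specific architecture.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Additive coupling layer: exactly invertible by construction.
    The forward pass can play the role of 'enhance' and the inverse the
    role of 'degrade'; x1/x2 are two halves of the feature channels."""
    def __init__(self, c=3, w=16):
        super().__init__()
        self.t = nn.Sequential(
            nn.Conv2d(c, w, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(w, c, 3, padding=1),
        )

    def forward(self, x1, x2):
        return x1, x2 + self.t(x1)     # enhance direction

    def inverse(self, y1, y2):
        return y1, y2 - self.t(y1)     # exact degrade direction
```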
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
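Read literally, this suggests a two-step pipeline: estimate and invert the illumination-color degradation, then refine the content. Below is a hypothetical sketch of that flow; the network names and the division-based relighting are assumptions, not the paper's method.

```python
import torch

def two_step_relight(low, degrade_net, refine_net, eps=1e-4):
    """Hypothetical two-step flow: estimate the illumination-color
    degradation, invert it to relight, then refine content/diffuse color."""
    degradation = degrade_net(low).clamp(min=eps)  # estimated distortion
    relit = low / degradation                      # undo estimated degradation
    return refine_net(relit)                       # recover diffuse color details
```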
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)