Decoupled Low-light Image Enhancement
- URL: http://arxiv.org/abs/2111.14458v1
- Date: Mon, 29 Nov 2021 11:15:38 GMT
- Title: Decoupled Low-light Image Enhancement
- Authors: Shijie Hao, Xu Han, Yanrong Guo, Meng Wang
- Abstract summary: We propose to decouple the enhancement model into two sequential stages.
The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping.
The second stage focuses on improving the appearance fidelity by suppressing the remaining degradation factors.
- Score: 21.111831640136835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The visual quality of photographs taken under imperfect lighting conditions
can be degraded by multiple factors, e.g., low lightness, imaging noise,
color distortion, and so on. Current low-light image enhancement models focus
only on improving low lightness, or simply treat all the degradation factors
as a whole, leading to sub-optimal performance. In this paper, we propose to
decouple the enhancement model into two sequential stages. The first stage
focuses on improving the scene visibility based on a pixel-wise non-linear
mapping. The second stage focuses on improving the appearance fidelity by
suppressing the remaining degradation factors. The decoupled model facilitates
the enhancement in two aspects. On the one hand, the whole low-light
enhancement task can be divided into two easier subtasks. The first subtask
aims only to enhance the visibility; it also helps to bridge the large
intensity gap between low-light and normal-light images, so that the second
subtask can be formulated as a local appearance adjustment. On the other hand,
since the parameter matrix learned in the first stage is aware of the
lightness distribution and the scene structure, it can be incorporated into
the second stage as complementary information. In the experiments, our model
demonstrates state-of-the-art performance in both qualitative and quantitative
comparisons with other low-light image enhancement models. In addition,
ablation studies validate the effectiveness of our model in multiple aspects,
such as the model structure and the loss function. The trained model is
available at
https://github.com/hanxuhfut/Decoupled-Low-light-Image-Enhancement.
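The two-stage decoupling described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' learned model: the helper names (`stage1_visibility`, `stage2_refinement`) are hypothetical, a fixed gamma-style curve stands in for the learned pixel-wise non-linear mapping, and a simple mean filter stands in for the learned appearance-restoration stage.

```python
import numpy as np

def stage1_visibility(img, gamma=0.45):
    """Stage 1 (illustrative): a pixel-wise non-linear mapping that lifts
    low intensities, analogous to the visibility-enhancement stage."""
    img = np.clip(img, 0.0, 1.0)
    return img ** gamma  # concave curve: brightens dark pixels the most

def stage2_refinement(img, kernel=3):
    """Stage 2 (illustrative): local smoothing stands in for the learned
    appearance-restoration stage that suppresses residual degradation."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out

def enhance(img):
    """Sequential decoupled pipeline: visibility first, then refinement."""
    return stage2_refinement(stage1_visibility(img))
```

On a uniformly dark image (grayscale, float values in [0, 1]), stage 1 alone raises the mean intensity from 0.1 to roughly 0.35, after which stage 2 only adjusts local appearance.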
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, helping enhance images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- Joint Correcting and Refinement for Balanced Low-Light Image Enhancement [26.399356992450763]
A novel structure is proposed that balances brightness, color, and illumination more effectively.
The Joint Correcting and Refinement Network (JCRNet) mainly consists of three stages that balance brightness, color, and illumination during enhancement.
arXiv Detail & Related papers (2023-09-28T03:16:45Z)
- Self-Aligned Concave Curve: Illumination Enhancement for Unsupervised Adaptation [36.050270650417325]
We propose a learnable illumination enhancement model for high-level vision.
Inspired by real camera response functions, we assume that the illumination enhancement function should be a concave curve.
Our model architecture and training designs mutually benefit each other, forming a powerful unsupervised normal-to-low light adaptation framework.
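The concave-curve assumption above can be checked numerically with one simple instance: a power curve `f(x) = x^γ` with `γ < 1` (the paper learns its own self-aligned curve; this fixed gamma curve is only an illustrative stand-in). A concave curve satisfies the midpoint inequality f((a+b)/2) ≥ (f(a)+f(b))/2, fixes the endpoints 0 and 1, and brightens every intermediate intensity.

```python
import numpy as np

def concave_curve(x, gamma=0.5):
    """One simple concave enhancement curve (gamma < 1), consistent with
    the assumption that illumination enhancement follows a concave curve."""
    return np.power(x, gamma)

# Midpoint concavity check on (0, 1]: f((a+b)/2) >= (f(a)+f(b))/2
xs = np.linspace(0.01, 1.0, 100)
a, b = xs[:-1], xs[1:]
assert np.all(concave_curve((a + b) / 2) >= (concave_curve(a) + concave_curve(b)) / 2)
```

Such a curve leaves black and white unchanged while lifting shadows, which is why it resembles the inverse of a real camera response function.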
arXiv Detail & Related papers (2022-10-07T19:32:55Z)
- Semi-supervised atmospheric component learning in low-light image problem [0.0]
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices.
This study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration.
arXiv Detail & Related papers (2022-04-15T17:06:33Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance the low-light image in the forward process and degrade the normal-light one inversely, with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
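PSNR, the metric behind the 0.95 dB gain reported above, compares an enhanced image against its normal-light reference; higher is better. A minimal sketch (assuming grayscale float images with peak value 1.0):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image; returns infinity when the two images are identical."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, hence 10*log10(1/0.01) = 20 dB.
ref = np.zeros((4, 4))
noisy = ref + 0.1
```

A 0.95 dB improvement therefore corresponds to roughly a 20% reduction in mean squared error against the reference.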
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- A Two-stage Unsupervised Approach for Low light Image Enhancement [18.365345507072234]
We propose a two-stage unsupervised method that decomposes the low light image enhancement into a pre-enhancement and a post-refinement problem.
Our method can significantly improve feature point matching and simultaneous localization and mapping in low-light conditions.
arXiv Detail & Related papers (2020-10-19T08:51:32Z)
- L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion [84.11514688735183]
We present a novel single-image low-light underwater image enhancer, L2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information.
A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast.
We demonstrate the performance of L2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes.
arXiv Detail & Related papers (2020-05-28T01:57:32Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.