Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement
- URL: http://arxiv.org/abs/2103.10621v2
- Date: Mon, 22 Mar 2021 03:35:49 GMT
- Title: Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement
- Authors: Kui Jiang, Zhongyuan Wang, Zheng Wang, Peng Yi, Xiao Wang, Yansheng
Qiu, Chen Chen, Chia-Wen Lin
- Abstract summary: We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
- Score: 52.49231695707198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement aims to improve an image's visibility while
keeping its visual naturalness. Different from existing methods, which tend to
accomplish the enhancement task directly, we investigate the intrinsic
degradation and relight the low-light image while refining the details and
color in two steps. Inspired by the color image formulation (diffuse
illumination color plus environment illumination color), we first estimate the
degradation from low-light inputs to simulate the distortion of environment
illumination color, and then refine the content to recover the loss of diffuse
illumination color. To this end, we propose a novel Degradation-to-Refinement
Generation Network (DRGN). Its distinctive features can be summarized as 1) A
novel two-step generation network for degradation learning and content
refinement. It is not only superior to one-step methods but is also capable of
synthesizing sufficient paired samples to benefit the model training; 2) A
multi-resolution fusion network to represent the target information
(degradation or content) in a multi-scale cooperative manner, which is more
effective at addressing the complex unmixing problem. Extensive experiments on
both the enhancement task and the joint detection task have verified the
effectiveness and efficiency of our proposed method, surpassing the SOTA by
0.95 dB in PSNR on the LOL1000 dataset and 3.18% in mAP on the ExDark dataset. Our
code is available at https://github.com/kuijiang0802/DRGN
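The color image formulation the abstract appeals to can be written out explicitly. The symbols below are my own shorthand for the two components the abstract names, not notation taken from the paper:

```latex
% Observed image as the sum of two illumination-color components:
%   I   -- observed (low-light) image
%   I_d -- diffuse illumination color (scene content; detail/color lost in low light)
%   I_e -- environment illumination color (distorted in low light)
\[
  I = I_{d} + I_{e}
\]
```

Under this reading, DRGN's first step estimates the degradation affecting \(I_{e}\) from the low-light input, and its second step refines the content to recover the loss in \(I_{d}\).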
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textual details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose ZERRINNet, a new zero-shot low-light enhancement method based on learning-based Retinex decomposition.
Our method is a zero-reference enhancement method that is not affected by the training data of paired and unpaired datasets.
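For context, the Retinex decomposition that this family of methods builds on factors an image into reflectance and illumination, I = R * L. Below is a minimal single-scale sketch using a Gaussian-smoothed copy of the image as the illumination estimate; this is a classical baseline for illustration, not ZERRINNet itself, and all function names are assumptions of mine:

```python
# Minimal single-scale Retinex decomposition sketch (classical baseline,
# not the ZERRINNet network): I = R * L, with the illumination L
# approximated by a Gaussian-smoothed copy of the image.
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def retinex_decompose(img, sigma=15.0, eps=1e-6):
    """Split a grayscale image (H x W, values in (0, 1]) into
    reflectance R and illumination L such that I = R * L."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    # Separable Gaussian blur: filter along rows, then along columns.
    L = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    L = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, L)
    L = np.clip(L, eps, None)          # avoid division by zero
    R = img / L                        # reflectance carries scene detail
    return R, L
```

Retinex-style enhancers then brighten the illumination map L (for example by gamma correction) and recombine it with R, rather than adjusting pixel intensities directly.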
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach in diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement [10.899693396348171]
LCDBNet is composed of two branches, namely a luminance adjustment network (LAN) and a chrominance restoration network (CRN).
LAN takes responsibility for learning brightness-aware features leveraging long-range dependency and local attention correlation.
CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to produce visually impressive images.
arXiv Detail & Related papers (2023-07-18T09:52:48Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones in the inverse process with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image Enhancement [11.738203047278848]
We propose a Two-Stage Network with Channel Attention (denoted as TSN-CA) to enhance the brightness of the low-light image.
We conduct extensive experiments to demonstrate that our method achieves excellent effect on brightness enhancement as well as denoising, details preservation and halo artifacts elimination.
arXiv Detail & Related papers (2021-10-06T03:20:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.