Division Gets Better: Learning Brightness-Aware and Detail-Sensitive
Representations for Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2307.09104v1
- Date: Tue, 18 Jul 2023 09:52:48 GMT
- Title: Division Gets Better: Learning Brightness-Aware and Detail-Sensitive
Representations for Low-Light Image Enhancement
- Authors: Huake Wang, Xiaoyang Yan, Xingsong Hou, Junhui Li, Yujie Dun, Kaibing
Zhang
- Abstract summary: LCDBNet is composed of two branches, namely a luminance adjustment network (LAN) and a chrominance restoration network (CRN).
LAN takes responsibility for learning brightness-aware features leveraging long-range dependency and local attention correlation.
CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to produce visually impressive images.
- Score: 10.899693396348171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement strives to improve the contrast, adjust the
visibility, and restore the distortion in color and texture. Existing methods
usually pay more attention to improving the visibility and contrast via
increasing the lightness of low-light images, while disregarding the
significance of color and texture restoration for high-quality images. To
address this issue, we propose a novel luminance and chrominance dual-branch
network, termed LCDBNet, for low-light image enhancement, which divides the
task into two sub-tasks, i.e., luminance adjustment and chrominance
restoration. Specifically, LCDBNet is composed of two branches, namely
luminance adjustment network (LAN) and chrominance restoration network (CRN).
LAN is responsible for learning brightness-aware features by leveraging
long-range dependencies and local attention correlation, while CRN concentrates
on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to
produce visually impressive images. Extensive experiments conducted on seven
benchmark datasets validate the effectiveness of our proposed LCDBNet, and the
results manifest that LCDBNet achieves superior performance in terms of
multiple reference/non-reference quality evaluators compared to other
state-of-the-art competitors. Our code and pretrained model will be available.
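The "division" idea behind LCDBNet — process luminance and chrominance separately, then fuse — can be illustrated with a toy split in YCbCr space. This is a minimal sketch, not the paper's actual networks: a gamma curve on the Y channel stands in for LAN, and an identity chrominance path stands in for CRN; the conversion coefficients are the standard ITU-R BT.601 (JPEG) ones.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H, W, 3), floats in [0, 1], to YCbCr (BT.601)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(img):
    """Inverse BT.601 conversion, clipped back to [0, 1]."""
    y, cb, cr = img[..., 0], img[..., 1] - 0.5, img[..., 2] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def enhance(img, gamma=0.5):
    """Toy dual-branch enhancement: brighten luminance, keep chrominance.

    The gamma adjustment is a stand-in for the learned LAN branch;
    the chrominance branch (a stand-in for CRN) is left as identity.
    """
    ycc = rgb_to_ycbcr(img)
    ycc[..., 0] = ycc[..., 0] ** gamma  # luminance branch
    return ycbcr_to_rgb(ycc)            # trivial "fusion": recombine channels
```

In LCDBNet the two channel groups are instead processed by learned sub-networks and blended by a fusion network; the sketch only shows why the split is convenient — brightness lives in one channel, color detail in the other two.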
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- CDAN: Convolutional dense attention-guided network for low-light image enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network that enhances low-light images in the forward process and degrades normal-light ones inversely, trained with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but they disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image Enhancement [11.738203047278848]
We propose a Two-Stage Network with Channel Attention (denoted as TSN-CA) to enhance the brightness of the low-light image.
We conduct extensive experiments to demonstrate that our method achieves excellent results in brightness enhancement as well as denoising, detail preservation, and halo artifact elimination.
arXiv Detail & Related papers (2021-10-06T03:20:18Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.