You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement
- URL: http://arxiv.org/abs/2402.05809v3
- Date: Mon, 17 Jun 2024 20:43:01 GMT
- Title: You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement
- Authors: Qingsen Yan, Yixu Feng, Cheng Zhang, Pei Wang, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang
- Abstract summary: The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
- Score: 50.37253008333166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images. Most existing methods learn the mapping function between low/normal-light image pairs with Deep Neural Networks (DNNs) in the sRGB and HSV color spaces. Nevertheless, enhancement involves amplifying image signals, and applying these color spaces to low-light images with a low signal-to-noise ratio can introduce sensitivity and instability into the enhancement process, resulting in color and brightness artifacts in the enhanced images. To alleviate this problem, we propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI). It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images in different illumination ranges thanks to its trainable parameters. Further, we design a novel Color and Intensity Decoupling Network (CIDNet) with two branches dedicated to processing the decoupled image brightness and color in the HVI space. Within CIDNet, we introduce a Lightweight Cross-Attention (LCA) module that facilitates interaction between image structure and content information in both branches while suppressing noise in low-light images. Finally, we conducted 22 quantitative and qualitative experiments showing that the proposed CIDNet outperforms the state-of-the-art methods on 11 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
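The abstract does not spell out the HVI transform itself, so the following is only an illustrative sketch of the general idea of decoupling intensity from color: a generic max-RGB intensity plus color-ratio decomposition, not the paper's actual HVI definition.

```python
import numpy as np

def decouple_intensity_color(rgb, eps=1e-6):
    """Split an RGB image into an intensity map and a color (ratio) map.

    A generic stand-in for intensity/color decoupling; NOT the HVI
    transform from the paper, whose definition is not in the abstract.
    """
    intensity = rgb.max(axis=-1, keepdims=True)  # max-RGB brightness
    color = rgb / (intensity + eps)              # per-pixel chromaticity ratios
    return intensity, color

def recombine(intensity, color):
    """Invert the decomposition: scale the color ratios by the intensity."""
    return color * intensity

# Brightening the intensity map alone leaves the color ratios untouched,
# which is the motivation for decoupling: brightness edits cannot shift hue.
rgb = np.random.rand(4, 4, 3)
i, c = decouple_intensity_color(rgb)
brightened = recombine(np.clip(i * 2.0, 0.0, 1.0), c)
```

Enhancing `intensity` with a learned branch while a second branch refines `color` mirrors, at a very coarse level, the two-branch design the abstract describes for CIDNet.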
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Training Neural Networks on RAW and HDR Images for Restoration Tasks [59.41340420564656]
In this work, we test approaches on three popular image restoration applications: denoising, deblurring, and single-image super-resolution.
Our results indicate that neural networks train significantly better on HDR and RAW images represented in display color spaces.
This small change to the training strategy can bring a very substantial gain in performance, up to 10-15 dB.
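The summary does not name the display encoding used. The Perceptual Quantizer (PQ, SMPTE ST 2084) is one common display color encoding for HDR data; whether it is the exact encoding the paper used is an assumption, but its inverse EOTF (linear luminance to display-encoded code values) can be sketched as:

```python
import numpy as np

# PQ (SMPTE ST 2084) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Map absolute linear luminance (0..10000 cd/m^2) to PQ code values in [0, 1].

    Standard PQ inverse EOTF; using it here to illustrate "display color
    space" training inputs is this sketch's assumption, not the paper's.
    """
    y = np.clip(np.asarray(luminance_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    ym1 = np.power(y, M1)
    return np.power((C1 + C2 * ym1) / (1.0 + C3 * ym1), M2)
```

Training on `pq_encode(linear_values)` rather than raw linear values compresses the wide dynamic range into a perceptually more uniform scale, which is the kind of representation change the cited result attributes the gains to.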
arXiv Detail & Related papers (2023-12-06T17:47:16Z) - Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement [10.899693396348171]
LCDBNet is composed of two branches: a luminance adjustment network (LAN) and a chrominance restoration network (CRN).
LAN is responsible for learning brightness-aware features by leveraging long-range dependencies and local attention correlation.
CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to produce visually impressive images.
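Multi-level wavelet decomposition is a standard way to separate coarse structure from fine detail. A single level of the 2D Haar transform (a minimal sketch; the summary does not say which wavelet CRN actually uses) looks like:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar wavelet transform on an even-sized image.

    Returns (LL, LH, HL, HH): a half-resolution low-pass approximation
    plus three half-resolution detail sub-bands.
    """
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # coarse brightness structure
    lh = (a - b + c - d) / 2.0  # high-pass along x
    hl = (a + b - c - d) / 2.0  # high-pass along y
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

Recursing on the LL band yields the multi-level decomposition; a detail-sensitive branch can then operate on the high-frequency sub-bands while a brightness branch works on LL.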
arXiv Detail & Related papers (2023-07-18T09:52:48Z) - Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image Enhancement [11.738203047278848]
We propose a Two-Stage Network with Channel Attention (denoted as TSN-CA) to enhance the brightness of the low-light image.
We conduct extensive experiments to demonstrate that our method performs well on brightness enhancement as well as denoising, detail preservation, and halo-artifact elimination.
arXiv Detail & Related papers (2021-10-06T03:20:18Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z) - Better Than Reference In Low Light Image Enhancement: Conditional Re-Enhancement Networks [7.403383360312335]
We propose a low-light image enhancement method that can be combined with supervised learning and with previous HSV- or Retinex-model-based image enhancement methods.
A data-driven conditional re-enhancement network (denoted as CRENet) is proposed.
The network takes the low-light image as input and the enhanced V channel as a condition, and then re-enhances the contrast and brightness of the low-light image.
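The V-channel idea can be illustrated with a minimal sketch: enhance only the value (brightness) channel and rescale the RGB channels accordingly, so hue and saturation are preserved. The gamma curve below is a hypothetical stand-in for the enhanced V channel; in CRENet the enhanced V channel comes from a prior enhancement method and conditions a learned network instead.

```python
import numpy as np

def enhance_via_v_channel(rgb, gamma=2.2, eps=1e-6):
    """Brighten an RGB image by modifying only its HSV value channel.

    The gamma curve is a hypothetical stand-in for an enhanced V channel,
    not CRENet's learned re-enhancement.
    """
    v = rgb.max(axis=-1, keepdims=True)    # HSV value channel (max of R, G, B)
    v_enhanced = np.power(v, 1.0 / gamma)  # simple brightening curve
    # Rescale RGB so the max channel equals the enhanced V; the ratios
    # between channels (hue/saturation) are unchanged.
    return rgb * (v_enhanced / (v + eps))

dark = np.random.rand(4, 4, 3) * 0.2       # a synthetic dark image
bright = enhance_via_v_channel(dark)
```

Because every pixel is multiplied by a single per-pixel scale factor, brightness changes cannot shift hue, which is the property that makes V-channel (rather than per-channel RGB) enhancement attractive.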
arXiv Detail & Related papers (2020-08-26T08:10:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.