HVI: A New Color Space for Low-light Image Enhancement
- URL: http://arxiv.org/abs/2502.20272v2
- Date: Fri, 28 Feb 2025 11:13:24 GMT
- Title: HVI: A New Color Space for Low-light Image Enhancement
- Authors: Qingsen Yan, Yixu Feng, Cheng Zhang, Guansong Pang, Kangbiao Shi, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang
- Abstract summary: We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI). HVI is defined by polarized HS maps and a learnable intensity; the former removes red artifacts while the latter compresses low-light regions to remove black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
- Score: 58.8280819306909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-Light Image Enhancement (LLIE) is a crucial computer vision task that aims to restore detailed visual information from corrupted low-light images. Many existing LLIE methods are based on standard RGB (sRGB) space, which often produce color bias and brightness artifacts due to inherent high color sensitivity in sRGB. While converting the images using Hue, Saturation and Value (HSV) color space helps resolve the brightness issue, it introduces significant red and black noise artifacts. To address this issue, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by polarized HS maps and learnable intensity. The former enforces small distances for red coordinates to remove the red artifacts, while the latter compresses the low-light regions to remove the black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is further introduced to learn accurate photometric mapping function under different lighting conditions in the HVI space. Comprehensive results from benchmark and ablation experiments show that the proposed HVI color space with CIDNet outperforms the state-of-the-art methods on 10 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
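The core geometric idea can be sketched in a few lines. Below is a minimal, hypothetical HVI-style transform in PyTorch: the class name `HVITransform`, the exact collapse function, and the parameter `k_init` are illustrative assumptions, not the official implementation (see the linked repository for that). Polarization maps the circular HSV hue onto a cos/sin plane so the two numeric ends of red (hue 0 and hue 1) coincide, and a learnable exponent shrinks chroma in dark regions.

```python
import torch
import torch.nn as nn

class HVITransform(nn.Module):
    """Hypothetical HVI-style color transform (illustration only; see the
    official HVI-CIDNet repository for the actual definition)."""

    def __init__(self, k_init: float = 1.0, eps: float = 1e-8):
        super().__init__()
        # Learnable intensity-collapse exponent (the "learnable intensity").
        self.k = nn.Parameter(torch.tensor(k_init))
        self.eps = eps

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W), values in [0, 1].
        v_max, _ = rgb.max(dim=1)                      # HSV value (intensity)
        v_min, _ = rgb.min(dim=1)
        delta = v_max - v_min + self.eps
        sat = (v_max - v_min) / (v_max + self.eps)     # HSV saturation

        r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
        hue = torch.where(
            v_max == r, ((g - b) / delta) % 6.0,
            torch.where(v_max == g, (b - r) / delta + 2.0,
                        (r - g) / delta + 4.0),
        ) / 6.0                                        # hue in [0, 1)

        # Polarization: the circular hue becomes a point on a cos/sin plane,
        # so red at hue 0 and hue 1 maps to the same coordinates.
        h_axis = torch.cos(2 * torch.pi * hue)
        v_axis = torch.sin(2 * torch.pi * hue)

        # Intensity collapse: shrink the chroma radius in dark regions so
        # near-black noise cannot produce large, arbitrary color coordinates.
        c_k = (torch.sin(torch.pi * v_max / 2.0) + self.eps) ** self.k
        return torch.stack([c_k * sat * h_axis, c_k * sat * v_axis, v_max], dim=1)
```

Under such a mapping, two red pixels with HSV hues 0.01 and 0.99 land at nearly identical (H, V) coordinates, which is the "small distances for red coordinates" property the abstract describes.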
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- AGG-Net: Attention Guided Gated-convolutional Network for Depth Image Completion [1.8820731605557168]
We propose a new model for depth image completion based on the Attention Guided Gated-convolutional Network (AGG-Net).
In the encoding stage, an Attention Guided Gated-Convolution (AG-GConv) module is proposed to realize the fusion of depth and color features at different scales.
In the decoding stage, an Attention Guided Skip Connection (AG-SC) module is presented to avoid introducing too many depth-irrelevant features to the reconstruction.
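As a rough illustration of the attention-guided gating idea in this summary, the sketch below gates depth features with an attention map predicted jointly from depth and color features. `AGGatedConv` and its layer choices are hypothetical, not the published AG-GConv design.

```python
import torch
import torch.nn as nn

class AGGatedConv(nn.Module):
    """Hypothetical attention-guided gated convolution for depth-color fusion.

    The gate is predicted jointly from depth and color features, so color
    cues can suppress unreliable depth responses. Layer choices are
    illustrative only.
    """

    def __init__(self, depth_ch: int, color_ch: int, out_ch: int):
        super().__init__()
        self.feature = nn.Conv2d(depth_ch, out_ch, kernel_size=3, padding=1)
        self.gate = nn.Sequential(
            nn.Conv2d(depth_ch + color_ch, out_ch, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
        # Element-wise gating: attention in [0, 1] modulates depth features.
        g = self.gate(torch.cat([depth_feat, color_feat], dim=1))
        return self.feature(depth_feat) * g
```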
arXiv Detail & Related papers (2023-09-04T14:16:08Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Prior works mainly focus on low-light images captured in the visible spectrum, using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured in low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z)
- DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior [6.162654963520402]
High-intensity noise in low-light images amplifies the structure inconsistency between RGB and NIR images, causing existing algorithms to fail.
We propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior (DIP).
Based on the deep structures from both RGB and NIR domains, we introduce the DIP to leverage the structure inconsistency to guide the fusion of RGB-NIR.
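A toy reading of how structure inconsistency might guide RGB-NIR fusion follows. `inconsistency_guided_fusion` and its gradient-based structure maps are guesses at the spirit of the DIP, not DVN's actual implementation.

```python
import torch
import torch.nn.functional as F

def inconsistency_guided_fusion(rgb_feat: torch.Tensor, nir_feat: torch.Tensor) -> torch.Tensor:
    """Toy fusion rule weighted by RGB-NIR structure (dis)agreement.

    Both inputs: (B, C, H, W). A guess at the idea behind DVN's Deep
    Inconsistency Prior, not the published implementation.
    """
    def grad_mag(x: torch.Tensor) -> torch.Tensor:
        # Crude structure map: channel-averaged local gradient magnitude.
        dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))
        dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
        return (dx ** 2 + dy ** 2).mean(dim=1, keepdim=True).sqrt()

    # Large values where the two modalities' edge structures disagree.
    inconsistency = (grad_mag(rgb_feat) - grad_mag(nir_feat)).abs()
    weight = torch.exp(-inconsistency)      # agreement -> weight near 1
    # Keep RGB as the base; inject NIR detail only where structures agree.
    return rgb_feat + weight * nir_feat
```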
arXiv Detail & Related papers (2023-03-13T03:31:29Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors, as well as additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- Better Than Reference In Low Light Image Enhancement: Conditional Re-Enhancement Networks [7.403383360312335]
We propose a low-light image enhancement method that can be combined with supervised learning and previous HSV- or Retinex-model-based image enhancement methods.
A data-driven conditional re-enhancement network (denoted as CRENet) is proposed.
The network takes a low-light image as input and the enhanced V channel as a condition, then re-enhances the contrast and brightness of the low-light image.
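A minimal sketch of the conditional setup just described, assuming a small convolutional body: `ConditionalReEnhancer` and its layer sizes are hypothetical, not the authors' CRENet architecture.

```python
import torch
import torch.nn as nn

class ConditionalReEnhancer(nn.Module):
    """Minimal CRENet-style sketch: re-enhance a low-light RGB image,
    conditioned on a V channel enhanced by any prior HSV/Retinex method.
    Layer sizes are hypothetical."""

    def __init__(self, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + 1, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low_rgb: torch.Tensor, enhanced_v: torch.Tensor) -> torch.Tensor:
        # low_rgb: (B, 3, H, W); enhanced_v: (B, 1, H, W) conditioning signal.
        return self.body(torch.cat([low_rgb, enhanced_v], dim=1))
```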
arXiv Detail & Related papers (2020-08-26T08:10:48Z)