HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2507.06814v1
- Date: Wed, 09 Jul 2025 13:03:34 GMT
- Title: HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement
- Authors: Qingsen Yan, Kangbiao Shi, Yixu Feng, Tao Hu, Peng Wu, Guansong Pang, Yanning Zhang
- Abstract summary: Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. Existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts. We propose a new color space for LLIE, defined by the HV color map and a learnable intensity. HVI-CIDNet+ is built upon the HVI color space to restore damaged content and mitigate color distortion in extremely dark regions.
- Score: 56.02740727422916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. However, existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts due to sRGB's inherently high color sensitivity. While the Hue, Saturation, and Value (HSV) color space can decouple brightness and color, it introduces significant red and black noise artifacts. To address this problem, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by an HV color map and a learnable intensity. The HV color map enforces small distances between red coordinates to remove red noise artifacts, while the learnable intensity compresses low-light regions to remove black noise artifacts. Additionally, we introduce the Color and Intensity Decoupling Network+ (HVI-CIDNet+), built upon the HVI color space, to restore damaged content and mitigate color distortion in extremely dark regions. Specifically, HVI-CIDNet+ leverages abundant contextual and degradation knowledge extracted from low-light images by pre-trained vision-language models, integrated via a novel Prior-guided Attention Block (PAB). Within the PAB, latent semantic priors promote content restoration, while degraded representations guide precise color correction, particularly in extremely dark regions, through a carefully designed cross-attention fusion mechanism. Furthermore, we construct a Region Refinement Block that employs convolution for information-rich regions and self-attention for information-scarce regions, ensuring accurate brightness adjustments. Comprehensive benchmark experiments demonstrate that the proposed HVI-CIDNet+ outperforms state-of-the-art methods on 10 datasets.
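To make the HVI idea above concrete, here is a minimal NumPy sketch of an RGB-to-HVI-style transform. It is an illustrative approximation, not the paper's exact definition: the function name `rgb_to_hvi`, the polarization of hue/saturation into Cartesian HV coordinates, and the intensity-collapse term `(sin(pi*v/2) + eps)**(1/k)` are assumptions consistent with the abstract, and `k` is a fixed scalar here rather than the learnable parameter the paper describes.

```python
import numpy as np

def rgb_to_hvi(rgb, k=1.0, eps=1e-8):
    """Sketch of an RGB -> HVI-style transform (hypothetical, for intuition).

    rgb: float array in [0, 1] with shape (..., 3).
    k:   collapse exponent; fixed here, learnable in the paper.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                       # HSV value, used as intensity
    c = v - rgb.min(axis=-1)                   # chroma
    s = c / (v + eps)                          # HSV saturation

    # Standard HSV hue in [0, 1), computed from the dominant channel.
    h = np.zeros_like(v)
    m = c > eps
    rm = m & (v == r)
    gm = m & (v == g) & ~rm
    bm = m & (v == b) & ~rm & ~gm
    h[rm] = ((g - b)[rm] / c[rm]) % 6.0
    h[gm] = (b - r)[gm] / c[gm] + 2.0
    h[bm] = (r - g)[bm] / c[bm] + 4.0
    h /= 6.0

    # Intensity collapse: dark pixels (small v) shrink toward the HV-plane
    # origin, suppressing amplified "black" noise in low-light regions.
    collapse = (np.sin(np.pi * v / 2.0) + eps) ** (1.0 / k)

    # Polarized HV map: hue becomes an angle, so red hues (h near 0 or 1)
    # land on nearby coordinates instead of opposite ends of an axis.
    hx = collapse * s * np.cos(2.0 * np.pi * h)
    hy = collapse * s * np.sin(2.0 * np.pi * h)
    return np.stack([hx, hy, v], axis=-1)      # (H, V, I) channels
```

Because hue is treated as an angle, red pixels (hue near 0 or 1) map to nearby HV coordinates rather than to opposite ends of an axis, which is the geometric intuition behind removing red noise artifacts; the collapse factor shrinks dark pixels toward the HV origin, which is the intuition behind suppressing black noise artifacts.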
Related papers
- HVI: A New Color Space for Low-light Image Enhancement [58.8280819306909]
We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI). HVI is defined by polarized HS maps and a learnable intensity; the latter compresses low-light regions to remove black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
arXiv Detail & Related papers (2025-02-27T16:59:51Z)
- Sagiri: Low Dynamic Range Image Enhancement with Generative Diffusion Prior [26.93312785388343]
High Dynamic Range scenery captured with 8-bit cameras often suffers from over-/under-exposure, loss of fine detail due to low bit-depth compression, skewed color distributions, and strong noise in dark areas.
Traditional LDR image enhancement methods primarily focus on color mapping, which enhances the visual representation by expanding the image's color range and adjusting the brightness.
We propose a novel two-stage approach to address the full scope of challenges in HDR imaging and surpass the limitations of current models.
arXiv Detail & Related papers (2024-06-13T17:58:40Z)
- Color Shift Estimation-and-Correction for Image Enhancement [37.52492067462557]
Images captured under sub-optimal illumination conditions may contain both over- and under-exposures.
Current approaches mainly focus on adjusting image brightness, which may exacerbate the color tone distortion in under-exposed areas.
We propose a novel method to enhance images with both over- and under-exposures by learning to estimate and correct such color shifts.
arXiv Detail & Related papers (2024-05-28T00:45:35Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- AGG-Net: Attention Guided Gated-convolutional Network for Depth Image Completion [1.8820731605557168]
We propose a new model for depth image completion based on the Attention Guided Gated-convolutional Network (AGG-Net).
In the encoding stage, an Attention Guided Gated-Convolution (AG-GConv) module is proposed to realize the fusion of depth and color features at different scales.
In the decoding stage, an Attention Guided Skip Connection (AG-SC) module is presented to avoid introducing too many depth-irrelevant features to the reconstruction.
arXiv Detail & Related papers (2023-09-04T14:16:08Z)
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed the Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)