Learning Physics-Informed Color-Aware Transforms for Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2504.11896v1
- Date: Wed, 16 Apr 2025 09:23:38 GMT
- Title: Learning Physics-Informed Color-Aware Transforms for Low-Light Image Enhancement
- Authors: Xingxing Yang, Jie Chen, Zaifeng Yang
- Abstract summary: We introduce a novel approach to low-light image enhancement based on decomposed physics-informed priors. Existing methods that directly map low-light to normal-light images in the sRGB color space suffer from inconsistent color predictions. Our proposed PiCat framework demonstrates superior performance compared to state-of-the-art methods across five benchmark datasets.
- Score: 5.8550460201927725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image decomposition offers deep insights into the imaging factors of visual data and significantly enhances various advanced computer vision tasks. In this work, we introduce a novel approach to low-light image enhancement based on decomposed physics-informed priors. Existing methods that directly map low-light to normal-light images in the sRGB color space suffer from inconsistent color predictions and high sensitivity to spectral power distribution (SPD) variations, resulting in unstable performance under diverse lighting conditions. To address these challenges, we introduce a Physics-informed Color-aware Transform (PiCat), a learning-based framework that converts low-light images from the sRGB color space into deep illumination-invariant descriptors via our proposed Color-aware Transform (CAT). This transformation enables robust handling of complex lighting and SPD variations. Complementing this, we propose the Content-Noise Decomposition Network (CNDN), which refines the descriptor distributions to better align with well-lit conditions by mitigating noise and other distortions, thereby effectively restoring content representations to low-light images. The CAT and the CNDN collectively act as a physical prior, guiding the transformation process from low-light to normal-light domains. Our proposed PiCat framework demonstrates superior performance compared to state-of-the-art methods across five benchmark datasets.
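The abstract does not specify the learned Color-aware Transform itself, but the principle it relies on, mapping sRGB values into illumination-invariant descriptors, can be illustrated with a classic hand-crafted analogue. The sketch below (function name and geometric-mean normalization are illustrative assumptions, not PiCat's actual CAT) shows log-chromaticity coordinates, where a uniform per-pixel illumination scale cancels out:

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Map an sRGB image (H, W, 3) in [0, 1] to log-chromaticity
    coordinates, a classic illumination-invariant descriptor.
    Dividing by the per-pixel geometric mean cancels any scalar
    illumination factor before the log is taken."""
    rgb = np.clip(rgb.astype(np.float64), eps, 1.0)
    geo_mean = np.cbrt(rgb.prod(axis=-1, keepdims=True))
    return np.log(rgb / geo_mean)

# A uniform brightness change leaves the descriptor unchanged:
img = np.random.rand(4, 4, 3) * 0.5 + 0.25
dim = 0.3 * img  # simulate a darker exposure of the same scene
assert np.allclose(log_chromaticity(img), log_chromaticity(dim))
```

A learned transform like CAT generalizes beyond this fixed formula, notably to spectral power distribution (SPD) variations that are not a single multiplicative scale, which is exactly the failure mode of hand-crafted invariants.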
Related papers
- LTCF-Net: A Transformer-Enhanced Dual-Channel Fourier Framework for Low-Light Image Restoration [1.049712834719005]
We introduce LTCF-Net, a novel network architecture designed for enhancing low-light images.
Our approach utilizes two color spaces - LAB and YUV - to efficiently separate and process color information.
Our model incorporates the Transformer architecture to comprehensively understand image content.
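The separation of color information that LTCF-Net exploits comes from standard luma/chroma color spaces. As a minimal sketch (using the fixed BT.601 RGB-to-YUV matrix; LTCF-Net's own processing is not specified here), luminance and chrominance can be split with one matrix multiply:

```python
import numpy as np

# BT.601 RGB -> YUV conversion: Y carries luminance while U and V
# carry chrominance, so lightness can be processed separately
# from color without shifting hue.
RGB_TO_YUV = np.array([
    [ 0.299,     0.587,     0.114   ],
    [-0.14713,  -0.28886,   0.436   ],
    [ 0.615,    -0.51499,  -0.10001 ],
])

def rgb_to_yuv(rgb):
    """rgb: (..., 3) array in [0, 1]; returns (..., 3) YUV values."""
    return rgb @ RGB_TO_YUV.T

# Grey pixels have (near-)zero chrominance in this space:
yuv = rgb_to_yuv(np.array([0.5, 0.5, 0.5]))
```

Because achromatic pixels land on U = V = 0, a network that enhances only the Y channel cannot introduce color casts on neutral regions.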
arXiv Detail & Related papers (2024-11-24T07:21:17Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
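The details of the trainable HVI space are not given in this summary, but the decoupling idea it builds on can be sketched with a fixed, non-trainable analogue (function names and the HSV-style max-channel intensity are assumptions for illustration only):

```python
import numpy as np

def decouple(rgb, eps=1e-6):
    """Split an RGB image into a scalar intensity map and
    intensity-normalized color ratios, so brightness can be
    adjusted without disturbing the color."""
    intensity = rgb.max(axis=-1, keepdims=True)      # HSV-style value
    chroma = rgb / np.maximum(intensity, eps)        # brightness-free color
    return intensity, chroma

def enhance(rgb, gain=2.0):
    """Brighten the intensity channel only, then recombine;
    the channel ratios (hue) are preserved exactly."""
    intensity, chroma = decouple(rgb)
    return np.clip(intensity * gain, 0.0, 1.0) * chroma

# A dark red pixel is brightened without changing its hue:
out = enhance(np.array([0.2, 0.05, 0.05]))
```

A trainable space like HVI replaces the fixed max-channel split with learned parameters, which is what lets it adapt across different illumination ranges.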
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Revealing Shadows: Low-Light Image Enhancement Using Self-Calibrated Illumination [4.913568097686369]
Self-Calibrated Illumination (SCI) is a strategy initially developed for RGB images.
We employ the SCI method to intensify and clarify details that are typically lost in low-light conditions.
This method of selective illumination enhancement leaves the color information intact, thus preserving the color integrity of the image.
arXiv Detail & Related papers (2023-12-23T08:49:19Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
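The Taylor-series trick above can be made concrete. A power function x^γ is a binomial series around x₀ = 1, so a few polynomial terms (adds and multiplies only) replace the costly exponential/log evaluation. The sketch below is a generic illustration of that idea, not the paper's exact formulation:

```python
import numpy as np

def gamma_taylor(x, gamma, order=4):
    """Approximate x**gamma with a Taylor (binomial) expansion
    around x0 = 1, avoiding the exponential/log of a true power
    operation. Accurate for x near 1; low orders suffice there."""
    t = x - 1.0
    result = np.ones_like(x)
    coeff = 1.0
    for k in range(1, order + 1):
        coeff *= (gamma - k + 1) / k   # binomial series coefficient
        result += coeff * t**k
    return result

# Compare against the exact power for a typical display gamma:
x = np.linspace(0.5, 1.5, 5)
approx = gamma_taylor(x, gamma=1 / 2.2, order=6)
exact = x ** (1 / 2.2)
```

The series only converges for |x - 1| < 1, so inputs are assumed normalized to a range near 1; the order parameter trades accuracy against computational cost.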
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Brighten-and-Colorize: A Decoupled Network for Customized Low-Light Image Enhancement [22.097267755811192]
Low-Light Image Enhancement (LLIE) aims to improve the perceptual quality of an image captured in low-light conditions.
Recent advances in this area mainly focus on the refinement of the lightness, while ignoring the role of chrominance.
In this work, a "brighten-and-colorize" network (called BCNet) is proposed to address the above issues.
arXiv Detail & Related papers (2023-08-06T06:04:16Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.