Perception-Inspired Color Space Design for Photo White Balance Editing
- URL: http://arxiv.org/abs/2512.09383v2
- Date: Thu, 11 Dec 2025 12:40:16 GMT
- Title: Perception-Inspired Color Space Design for Photo White Balance Editing
- Authors: Yang Cheng, Ziteng Cui, Shenghan Su, Lin Gu, Zenghui Zhang,
- Abstract summary: White balance (WB) is a key step in the image signal processor (ISP) pipeline. We propose a novel framework for WB correction that leverages a perception-inspired Learnable HSI (LHSI) color space.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: White balance (WB) is a key step in the image signal processor (ISP) pipeline that mitigates color casts caused by varying illumination and restores the scene's true colors. Currently, sRGB-based WB editing for post-ISP WB correction is widely used to address color constancy failures in the ISP pipeline when the original camera RAW is unavailable. However, additive color models (e.g., sRGB) are inherently limited by fixed nonlinear transformations and entangled color channels, which often impede their generalization to complex lighting conditions. To address these challenges, we propose a novel framework for WB correction that leverages a perception-inspired Learnable HSI (LHSI) color space. Built upon a cylindrical color model that naturally separates luminance from chromatic components, our framework further introduces dedicated parameters to enhance this disentanglement and learnable mapping to adaptively refine the flexibility. Moreover, a new Mamba-based network is introduced, which is tailored to the characteristics of the proposed LHSI color space. Experimental results on benchmark datasets demonstrate the superiority of our method, highlighting the potential of perception-inspired color space design in computational photography. The source code is available at https://github.com/YangCheng58/WB_Color_Space.
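As an illustration of the cylindrical separation of luminance from chromatic components the abstract refers to, the sketch below implements the classical, non-learnable RGB-to-HSI conversion. This is our own illustrative code under standard textbook formulas, not the authors' LHSI implementation, which additionally introduces learnable parameters on top of this kind of decomposition.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Classical RGB -> HSI conversion for an image of shape (H, W, 3)
    with float channels in [0, 1]. Returns (hue, saturation, intensity)
    stacked along the last axis; hue is in radians in [0, 2*pi)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guards against division by zero for black pixels

    # Intensity: arithmetic mean of the three channels.
    i = (r + g + b) / 3.0

    # Saturation: distance from the achromatic (gray) axis.
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)

    # Hue: angle around the cylinder, via the standard arccos formula.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - theta, theta)

    return np.stack([h, s, i], axis=-1)
```

For a pure-red pixel `(1, 0, 0)` this yields hue ≈ 0, saturation 1, and intensity 1/3, showing how chroma (H, S) is held separately from luminance (I).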
Related papers
- RAW-Flow: Advancing RGB-to-RAW Image Reconstruction with Deterministic Latent Flow Matching [55.03149221192589]
We introduce a novel framework named RAW-Flow to bridge the gap between RGB and RAW representations. We also introduce a cross-scale context guidance module that injects hierarchical RGB features into the flow estimation process. RAW-Flow outperforms state-of-the-art approaches both quantitatively and visually.
arXiv Detail & Related papers (2026-01-28T08:27:38Z) - HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement [56.02740727422916]
Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. Existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts. We propose a new color space for LLIE, defined by the HV color map and learnable intensity. HVI-CIDNet+ is built upon the HVI color space to restore damaged content and mitigate color distortion in extremely dark regions.
arXiv Detail & Related papers (2025-07-09T13:03:34Z) - Learning Physics-Informed Color-Aware Transforms for Low-Light Image Enhancement [5.8550460201927725]
We introduce a novel approach to low-light image enhancement based on decomposed physics-informed priors. Existing methods that directly map low-light to normal-light images in the sRGB color space suffer from inconsistent color predictions. Our proposed PiCat framework demonstrates superior performance compared to state-of-the-art methods across five benchmark datasets.
arXiv Detail & Related papers (2025-04-16T09:23:38Z) - ISPDiffuser: Learning RAW-to-sRGB Mappings with Texture-Aware Diffusion Models and Histogram-Guided Color Consistency [32.05482995863444]
RAW-to-sRGB mapping aims to generate DSLR-quality sRGB images from raw data captured by smartphone sensors. ISPDiffuser is a diffusion-based framework that separates the RAW-to-sRGB mapping into detail reconstruction in grayscale space. ISPDiffuser outperforms state-of-the-art competitors both quantitatively and visually.
arXiv Detail & Related papers (2025-03-25T02:29:39Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
Low-Light Image Enhancement (LLIE) aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - Rethinking RGB Color Representation for Image Restoration Models [55.81013540537963]
We augment the representation to hold structural information of local neighborhoods at each pixel.
Substituting the underlying representation space for the per-pixel losses facilitates the training of image restoration models.
Our space consistently improves overall metrics by reconstructing both color and local structures.
arXiv Detail & Related papers (2024-02-05T06:38:39Z) - Enhancing RAW-to-sRGB with Decoupled Style Structure in Fourier Domain [27.1716081216131]
Current methods ignore the difference between cell phone RAW images and DSLR camera RGB images.
We present a novel Neural ISP framework, named FourierISP.
This approach breaks the image down into style and structure within the frequency domain, allowing for independent optimization.
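The frequency-domain split of "style" from "structure" can be loosely illustrated with a plain FFT decomposition: amplitude is commonly associated with global appearance statistics, phase with spatial structure. This is a generic heuristic sketch, not FourierISP's actual network.

```python
import numpy as np

def split_style_structure(img):
    """Decompose an image into FFT amplitude ("style"-like) and
    phase ("structure"-like) components over the spatial axes."""
    f = np.fft.fft2(img, axes=(0, 1))
    return np.abs(f), np.angle(f)

def recombine(amplitude, phase):
    """Invert the decomposition: amplitude * e^{i*phase} -> image."""
    f = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(f, axes=(0, 1)))
```

Because the two components are recombined exactly, each can be processed or optimized independently without losing information, which is the property such frequency-domain pipelines exploit.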
arXiv Detail & Related papers (2024-01-04T09:18:31Z) - Brighten-and-Colorize: A Decoupled Network for Customized Low-Light Image Enhancement [22.097267755811192]
Low-Light Image Enhancement (LLIE) aims to improve the perceptual quality of an image captured in low-light conditions.
Recent advances in this area mainly focus on the refinement of the lightness, while ignoring the role of chrominance.
In this work, a "brighten-and-colorize" network (called BCNet) is proposed to address the above issues.
arXiv Detail & Related papers (2023-08-06T06:04:16Z) - Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise, due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z) - Deep White-Balance Editing [50.08927449718674]
Cameras capture sensor images that are rendered by their integrated signal processor (ISP) to a standard RGB (sRGB) color space encoding.
Recent work by [3] showed that sRGB images that were rendered with the incorrect white balance cannot be easily corrected due to the ISP's nonlinear rendering.
We propose to solve this problem with a deep neural network (DNN) architecture trained in an end-to-end manner to learn the correct white balance.
arXiv Detail & Related papers (2020-04-03T03:18:42Z)
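At its simplest, white-balance correction applies a per-channel gain to remove a color cast. As a minimal, self-contained sketch (the classical gray-world algorithm, not the DNN method of the entry above), the gains can be estimated from the channel means:

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance for an image of shape (H, W, 3) with
    float channels in [0, 1]: assume the scene averages to neutral gray,
    so scale each channel's mean to the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)      # per-channel averages
    gray = means.mean()                          # target neutral level
    gains = gray / np.maximum(means, 1e-8)       # per-channel correction
    return np.clip(img * gains, 0.0, 1.0)
```

The gray-world assumption fails on scenes dominated by a single color, which is one motivation for the learned approaches surveyed in this list.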
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.