Color Shift Estimation-and-Correction for Image Enhancement
- URL: http://arxiv.org/abs/2405.17725v2
- Date: Wed, 29 May 2024 10:03:06 GMT
- Title: Color Shift Estimation-and-Correction for Image Enhancement
- Authors: Yiyu Li, Ke Xu, Gerhard Petrus Hancke, Rynson W. H. Lau
- Abstract summary: Images captured under sub-optimal illumination conditions may contain both over- and under-exposures.
Current approaches mainly focus on adjusting image brightness, which may exacerbate the color tone distortion in under-exposed areas.
We propose a novel method to enhance images with both over- and under-exposures by learning to estimate and correct such color shifts.
- Score: 37.52492067462557
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Images captured under sub-optimal illumination conditions may contain both over- and under-exposures. Current approaches mainly focus on adjusting image brightness, which may exacerbate the color tone distortion in under-exposed areas and fail to restore accurate colors in over-exposed regions. We observe that over- and under-exposed regions display opposite color tone distribution shifts with respect to each other, which may not be easily normalized in joint modeling as they usually do not have ``normal-exposed'' regions/pixels as reference. In this paper, we propose a novel method to enhance images with both over- and under-exposures by learning to estimate and correct such color shifts. Specifically, we first derive the color feature maps of the brightened and darkened versions of the input image via a UNet-based network, followed by a pseudo-normal feature generator to produce pseudo-normal color feature maps. We then propose a novel COlor Shift Estimation (COSE) module to estimate the color shifts between the derived brightened (or darkened) color feature maps and the pseudo-normal color feature maps. The COSE module corrects the estimated color shifts of the over- and under-exposed regions separately. We further propose a novel COlor MOdulation (COMO) module to modulate the separately corrected colors in the over- and under-exposed regions to produce the enhanced image. Comprehensive experiments show that our method outperforms existing approaches. Project webpage: https://github.com/yiyulics/CSEC.
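The pipeline described in the abstract (brightened/darkened branches, a UNet-based color feature extractor, a pseudo-normal feature generator, and the COSE and COMO modules) can be sketched with placeholder operations. Everything below is an illustrative assumption, not the authors' implementation: the gamma curves stand in for the learned exposure branches, chromaticity ratios stand in for the UNet color features, and simple averaging/blending stands in for the learned generator and modulation modules.

```python
import numpy as np

def brighten(img, gamma=0.6):
    # Gamma < 1 brightens; a stand-in for the learned brightening branch.
    return np.clip(img, 0.0, 1.0) ** gamma

def darken(img, gamma=1.8):
    # Gamma > 1 darkens; a stand-in for the learned darkening branch.
    return np.clip(img, 0.0, 1.0) ** gamma

def color_features(img):
    # Placeholder for the UNet-based color feature extractor:
    # per-pixel chromaticity (channel ratios) as a crude "color" map.
    s = img.sum(axis=-1, keepdims=True) + 1e-6
    return img / s

def pseudo_normal(feat_bright, feat_dark):
    # Placeholder pseudo-normal feature generator: average both branches.
    return 0.5 * (feat_bright + feat_dark)

def estimate_color_shift(feat, pseudo):
    # COSE stand-in: per-pixel deviation from the pseudo-normal features.
    return pseudo - feat

def modulate(img_bright, img_dark, shift_b, shift_d):
    # COMO stand-in: apply each correction separately, then blend.
    corrected_b = img_bright + shift_b
    corrected_d = img_dark + shift_d
    return np.clip(0.5 * (corrected_b + corrected_d), 0.0, 1.0)

def enhance(img):
    b, d = brighten(img), darken(img)
    fb, fd = color_features(b), color_features(d)
    pn = pseudo_normal(fb, fd)
    return modulate(b, d,
                    estimate_color_shift(fb, pn),
                    estimate_color_shift(fd, pn))

img = np.random.default_rng(0).uniform(0.0, 1.0, (4, 4, 3))
out = enhance(img)
print(out.shape)  # (4, 4, 3)
```

The point of the sketch is the data flow: two exposure branches are corrected toward a shared pseudo-normal reference before being merged, rather than jointly normalized against non-existent "normal-exposed" pixels.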
Related papers
- A Nerf-Based Color Consistency Method for Remote Sensing Images [0.5735035463793009]
We propose a NeRF-based color consistency method for multi-view images, which weaves image features together using implicit expressions and then re-illuminates the feature space to generate a fused image from a new perspective.
Experimental results show that the synthesized image generated by our method has an excellent visual effect and smooth color transitions at the edges.
arXiv Detail & Related papers (2024-11-08T13:26:07Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - ITRE: Low-light Image Enhancement Based on Illumination Transmission Ratio Estimation [10.26197196078661]
Noise, artifacts, and over-exposure are significant challenges in the field of low-light image enhancement.
We propose a novel Retinex-based method, called ITRE, which suppresses noise and artifacts from the origin of the model.
Extensive experiments demonstrate the effectiveness of our approach in suppressing noise, preventing artifacts, and controlling over-exposure level simultaneously.
arXiv Detail & Related papers (2023-10-08T13:22:20Z) - DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z) - Dequantization and Color Transfer with Diffusion Models [5.228564799458042]
Quantized images offer an easy abstraction for patch-based edits and palette transfer.
We show that our model can generate natural images that respect the color palette the user asked for.
Our method can be usefully extended to another practical edit: recoloring patches of an image while respecting the source texture.
arXiv Detail & Related papers (2023-07-06T00:07:32Z) - Low-light Image Enhancement via Breaking Down the Darkness [8.707025631892202]
This paper presents a novel framework inspired by the divide-and-rule principle.
We propose to convert an image from the RGB space into a luminance-chrominance one.
An adjustable noise suppression network is designed to eliminate noise in the brightened luminance.
The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors.
arXiv Detail & Related papers (2021-11-30T16:50:59Z) - Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with
Conditional StyleGAN [88.62422914645066]
We present an algorithm for re-rendering a person from a single image under arbitrary poses.
Existing methods often have difficulties in hallucinating occluded contents photo-realistically while preserving the identity and fine details in the source image.
We show that our method compares favorably against the state-of-the-art algorithms in both quantitative evaluation and visual comparison.
arXiv Detail & Related papers (2021-09-13T17:59:33Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate modeling this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
Experimental results on existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - Guided Colorization Using Mono-Color Image Pairs [6.729108277517129]
Monochrome images usually have a better signal-to-noise ratio (SNR) and richer textures due to their higher quantum efficiency.
We propose a mono-color image enhancement algorithm that colorizes the monochrome image with the color one.
Experimental results show that our algorithm can efficiently restore color images with higher SNR and richer details from the mono-color image pairs.
arXiv Detail & Related papers (2021-08-17T07:00:28Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
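Several entries above report gains in PSNR (e.g. +0.95 dB), the standard fidelity metric for enhancement results. As a quick reference, a minimal PSNR computation for images scaled to [0, 1]; the `psnr` helper and the sample arrays are illustrative, not taken from any of the listed papers:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    # Peak signal-to-noise ratio in dB between a reference image and a
    # test image, both scaled to [0, peak].
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8, 3), 0.5)
noisy = ref + 0.1  # uniform offset -> MSE of 0.01
print(round(psnr(ref, noisy), 2))  # 20.0
```

Because the scale is logarithmic, a ~1 dB improvement such as the one quoted above corresponds to roughly a 20% reduction in mean squared error.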
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.