Brighten-and-Colorize: A Decoupled Network for Customized Low-Light
Image Enhancement
- URL: http://arxiv.org/abs/2308.03029v1
- Date: Sun, 6 Aug 2023 06:04:16 GMT
- Title: Brighten-and-Colorize: A Decoupled Network for Customized Low-Light
Image Enhancement
- Authors: Chenxi Wang, Zhi Jin
- Abstract summary: Low-Light Image Enhancement (LLIE) aims to improve the perceptual quality of an image captured in low-light conditions.
Recent advances in this area mainly focus on the refinement of the lightness, while ignoring the role of chrominance.
In this work, a ``brighten-and-colorize'' network (called BCNet) is proposed to address the above issues.
- Score: 22.097267755811192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-Light Image Enhancement (LLIE) aims to improve the perceptual quality of
an image captured in low-light conditions. Generally, a low-light image can be
divided into lightness and chrominance components. Recent advances in this area
mainly focus on the refinement of the lightness, while ignoring the role of
chrominance. This neglect easily leads to chromatic aberration and, to some extent,
limits the diverse applications of chrominance in customized LLIE. In this
work, a ``brighten-and-colorize'' network (called BCNet), which introduces
image colorization to LLIE, is proposed to address the above issues. BCNet can
accomplish LLIE with accurate color and simultaneously enables customized
enhancement with varying saturations and color styles based on user
preferences. Specifically, BCNet regards LLIE as a multi-task learning problem:
brightening and colorization. The brightening sub-task aligns with other
conventional LLIE methods to get a well-lit lightness. The colorization
sub-task is accomplished by regarding the chrominance of the low-light image as
color guidance, as in user-guided image colorization. Upon completion of model
training, the color guidance (i.e., input low-light chrominance) can be simply
manipulated by users to acquire customized results. This customized process is
optional and, due to its decoupled nature, does not compromise the structural
and detailed information of lightness. Extensive experiments on the commonly
used LLIE datasets show that the proposed method achieves both State-Of-The-Art
(SOTA) performance and user-friendly customization.
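The decoupling described in the abstract can be sketched in a few lines: split the image into lightness and chrominance, brighten only the lightness, and treat the (possibly user-edited) chrominance as color guidance. This is an illustrative sketch only; the fixed BT.601 YCbCr transform, the function names, and the gamma/identity stand-in models below are assumptions, not BCNet's learned decomposition or actual architecture.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range split into lightness (Y) and chrominance (Cb, Cr)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564
    cr = 0.5 + (r - y) * 0.713
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Inverse BT.601 transform, clipped back into the valid [0, 1] range
    r = y + 1.403 * (cr - 0.5)
    b = y + 1.773 * (cb - 0.5)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def brighten_and_colorize(img, brighten, colorize, saturation=1.0):
    # Decoupled pipeline: the brightening model sees only lightness,
    # while the colorization model sees the user-adjustable chrominance guidance.
    y, cb, cr = rgb_to_ycbcr(img)
    y_enh = brighten(y)
    # Customization hook: scale chrominance around neutral gray (0.5)
    cb_g = 0.5 + (cb - 0.5) * saturation
    cr_g = 0.5 + (cr - 0.5) * saturation
    cb_out, cr_out = colorize(y_enh, cb_g, cr_g)
    return ycbcr_to_rgb(y_enh, cb_out, cr_out)

# Stand-in "models": gamma-curve brightening and identity colorization
low = np.random.rand(8, 8, 3) * 0.2  # synthetic dark image
out = brighten_and_colorize(
    low,
    brighten=lambda y: np.clip(y, 1e-6, 1.0) ** 0.45,
    colorize=lambda y, cb, cr: (cb, cr),
    saturation=1.5,
)
```

Because the customization only touches the chrominance path, the brightened lightness (and hence structure and detail) is unchanged no matter what saturation the user chooses, which is the decoupling property the abstract emphasizes.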
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement [10.899693396348171]
LCDBNet is composed of two branches, namely luminance adjustment network (LAN) and chrominance restoration network (CRN)
LAN takes responsibility for learning brightness-aware features leveraging long-range dependency and local attention correlation.
CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to produce visually impressive images.
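As a rough illustration of the multi-level wavelet decomposition this summary mentions (not LCDBNet's actual implementation), a two-level 2-D Haar decomposition can be written directly in numpy: each level splits a band into a coarse approximation plus horizontal, vertical, and diagonal detail bands, and the approximation is decomposed again. The function names and the normalization are assumptions for illustration.

```python
import numpy as np

def haar_level(x):
    # One level of 2-D Haar decomposition on an even-sized array:
    # group pixels into 2x2 blocks and form 4 sub-bands.
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4  # low-frequency approximation
    lh = (a + b - c - d) / 4  # horizontal detail
    hl = (a - b + c - d) / 4  # vertical detail
    hh = (a - b - c + d) / 4  # diagonal detail
    return ll, (lh, hl, hh)

def multi_level_haar(x, levels=2):
    # Wavelet pyramid: recursively decompose the approximation band.
    details = []
    for _ in range(levels):
        x, bands = haar_level(x)
        details.append(bands)
    return x, details

chroma = np.random.rand(16, 16)           # stand-in chrominance channel
approx, details = multi_level_haar(chroma, levels=2)
# approx: 4x4 coarse band; details: per-level (LH, HL, HH) tuples
```

The detail bands at different levels expose image structure at different scales, which is the kind of detail-sensitive representation the CRN branch is said to learn from.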
arXiv Detail & Related papers (2023-07-18T09:52:48Z)
- Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement [69.47143451986067]
Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images via a global and uniform manner, without taking into account the semantic information of different regions.
We propose a novel semantic-aware knowledge-guided framework that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
arXiv Detail & Related papers (2023-04-14T10:22:28Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise, due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net)
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images [67.14410374622699]
We propose an intelligible unsupervised personalized enhancer (iUP-Enhancer) for low-light images.
The proposed iUP-Enhancer is trained with the guidance of these correlations and the corresponding unsupervised loss functions.
Experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results.
arXiv Detail & Related papers (2022-07-15T07:16:10Z)
- Low-light Image Enhancement via Breaking Down the Darkness [8.707025631892202]
This paper presents a novel framework inspired by the divide-and-rule principle.
We propose to convert an image from the RGB space into a luminance-chrominance one.
An adjustable noise suppression network is designed to eliminate noise in the brightened luminance.
The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors.
arXiv Detail & Related papers (2021-11-30T16:50:59Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In such a way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.