Joint Correcting and Refinement for Balanced Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2309.16128v2
- Date: Thu, 19 Oct 2023 08:51:23 GMT
- Title: Joint Correcting and Refinement for Balanced Low-Light Image Enhancement
- Authors: Nana Yu, Hong Shi and Yahong Han
- Abstract summary: A novel structure is proposed which can balance brightness, color, and illumination more effectively.
The Joint Correcting and Refinement Network (JCRNet) mainly consists of three stages that balance the brightness, color, and illumination of the enhanced image.
- Score: 26.399356992450763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement demands an appropriate balance among
brightness, color, and illumination. Existing methods often focus on one
aspect of the image while neglecting this balance, which causes problems such
as color distortion and overexposure. This seriously affects both human
visual perception and the performance of high-level visual models. In this
work, a novel synergistic structure is proposed that balances brightness,
color, and illumination more effectively. Specifically, the proposed Joint
Correcting and Refinement Network (JCRNet) consists of three stages. Stage 1:
a basic encoder-decoder with a local supervision mechanism extracts local
information and more comprehensive details for enhancement. Stage 2:
cross-stage feature transmission and spatial feature transformation further
facilitate color correction and feature refinement. Stage 3: a dynamic
illumination adjustment approach embeds the residuals between predicted and
ground-truth images into the model, adaptively balancing illumination.
Extensive experiments demonstrate that the proposed method exhibits
comprehensive performance advantages over 21 state-of-the-art methods on 9
benchmark datasets. Furthermore, a more persuasive experiment validates the
effectiveness of our approach in downstream visual tasks (e.g., saliency
detection). Compared to several enhancement models, the proposed method
effectively improves the segmentation results and quantitative metrics of
saliency detection. The source code will be available at
https://github.com/woshiyll/JCRNet.
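The abstract describes the three-stage design only at a high level. The following is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together; all module names, layer sizes, and the residual illumination step are illustrative assumptions, not the actual JCRNet architecture.

```python
# Hypothetical sketch of a three-stage enhance/refine/adjust pipeline in the
# spirit of the abstract; NOT the authors' JCRNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage1EncoderDecoder(nn.Module):
    """Coarse enhancement via a small encoder-decoder. In training, its
    output would receive local supervision (per the abstract)."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        feat = self.enc(x)            # features transmitted across stages
        return self.dec(feat), feat

class Stage2Refine(nn.Module):
    """Color correction using features transmitted from stage 1."""
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Conv2d(3 + ch, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, coarse, feat):
        feat = F.interpolate(feat, size=coarse.shape[-2:], mode="bilinear",
                             align_corners=False)
        h = torch.relu(self.fuse(torch.cat([coarse, feat], dim=1)))
        return coarse + self.out(h)   # residual color refinement

class Stage3Illumination(nn.Module):
    """Predicts a residual illumination map; in training this residual would
    be tied to prediction/ground-truth differences (per the abstract)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, refined):
        return torch.clamp(refined + self.net(refined), 0.0, 1.0)

class ThreeStagePipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.s1 = Stage1EncoderDecoder()
        self.s2 = Stage2Refine()
        self.s3 = Stage3Illumination()

    def forward(self, x):
        coarse, feat = self.s1(x)
        return self.s3(self.s2(coarse, feat))

model = ThreeStagePipeline()
print(model(torch.rand(1, 3, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```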
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details (a minimal sketch of the mean-teacher idea follows below).
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
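The mean-teacher setup referenced above is a general semi-supervised technique in which the teacher network's weights are an exponential moving average (EMA) of the student's. Below is a minimal sketch of that general idea, not Semi-LLIE's actual design; the tiny networks, momentum value, and consistency loss are illustrative assumptions.

```python
# Minimal mean-teacher sketch (general technique, not Semi-LLIE's code):
# the teacher's weights track an EMA of the student's.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never updated by gradients

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

# On unpaired low-light images, the teacher's output serves as a pseudo-target.
unpaired = torch.rand(2, 3, 64, 64)
pseudo_target = teacher(unpaired)
consistency_loss = F.mse_loss(student(unpaired), pseudo_target)
consistency_loss.backward()  # take an optimizer step on the student, then:
ema_update(teacher, student)
```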
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose ZERRINNet, a new learning-based zero-shot low-light enhancement method built on Retinex decomposition.
Our method is a zero-reference enhancement method whose results do not depend on paired or unpaired training data (the classic Retinex decomposition it builds on is sketched below).
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
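For context, Retinex theory models an image as the pixel-wise product of reflectance and illumination, I = R * L. The sketch below shows the classic Gaussian-smoothing decomposition (a textbook baseline, not ZERRINNet's learned decomposition); the sigma and gamma values are arbitrary illustrative choices.

```python
# Classic single-scale Retinex decomposition (textbook baseline, not
# ZERRINNet's learned decomposition): I = R * L, with L taken as a
# Gaussian-smoothed version of the image.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=15.0, eps=1e-6):
    illum = gaussian_filter(img, sigma=(sigma, sigma, 0))  # smooth spatially only
    refl = img / (illum + eps)                             # reflectance estimate
    return refl, illum

img = np.random.rand(64, 64, 3)
refl, illum = retinex_decompose(img)
# Brighten by compressing the illumination (gamma 0.5 is an arbitrary choice).
enhanced = np.clip(refl * illum ** 0.5, 0.0, 1.0)
print(enhanced.shape)  # (64, 64, 3)
```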
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative, large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacity of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (see the sketch below).
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
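Concretely, gamma correction I**g can be rewritten as exp(g * ln I), and the exponential can then be replaced by a truncated Taylor series. The NumPy sketch below illustrates that approximation; the truncation order and clipping are illustrative assumptions, not details taken from the paper.

```python
# Sketch: approximate gamma correction I**g as exp(g * ln I) with a truncated
# Taylor series of exp; the truncation order is an illustrative choice.
import numpy as np

def gamma_taylor(img, gamma, order=8):
    x = gamma * np.log(np.clip(img, 1e-6, 1.0))  # avoid log(0)
    out = np.ones_like(img)
    term = np.ones_like(img)
    for k in range(1, order + 1):
        term = term * x / k        # accumulates x**k / k!
        out = out + term
    return np.clip(out, 0.0, 1.0)

img = np.random.rand(64, 64, 3).astype(np.float32)
err = np.abs(img ** 0.45 - gamma_taylor(img, 0.45)).max()
print(err)  # small where |gamma * ln I| is moderate; grows for very dark pixels
```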
- Learning to Adapt to Light [14.919947487248653]
We propose a biologically inspired method to handle light-related image-enhancement tasks with a unified network (called LA-Net).
A new module is built inspired by biological visual adaptation to achieve unified light adaptation in the low-frequency pathway.
Experiments on three tasks -- low-light enhancement, exposure correction, and tone mapping -- demonstrate that the proposed method achieves near state-of-the-art performance.
arXiv Detail & Related papers (2022-02-16T14:36:25Z)
- Decoupled Low-light Image Enhancement [21.111831640136835]
We propose to decouple the enhancement model into two sequential stages.
The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping.
The second stage focuses on improving the appearance fidelity by suppressing the rest degeneration factors.
arXiv Detail & Related papers (2021-11-29T11:15:38Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)