Learning a Single Convolutional Layer Model for Low Light Image
Enhancement
- URL: http://arxiv.org/abs/2305.14039v1
- Date: Tue, 23 May 2023 13:12:00 GMT
- Title: Learning a Single Convolutional Layer Model for Low Light Image
Enhancement
- Authors: Yuantong Zhang, Baoxin Teng, Daiqin Yang, Zhenzhong Chen, Haichuan Ma,
Gang Li, Wenpeng Ding
- Abstract summary: Low-light image enhancement (LLIE) aims to improve the illuminance of images degraded by insufficient light exposure.
A single convolutional layer model (SCLM) is proposed that provides global low-light enhancement as a coarse result.
Experimental results demonstrate that the proposed method performs favorably against state-of-the-art LLIE methods in both objective metrics and subjective visual quality.
- Score: 43.411846299085575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) aims to improve the illuminance
of images degraded by insufficient light exposure. Recently, various
lightweight learning-based LLIE methods have been proposed to handle
challenges such as low contrast and low brightness. In this paper, we
streamline the network architecture to the utmost degree. Using the structural
re-parameterization technique, we propose a single convolutional layer model
(SCLM) that provides global low-light enhancement as a coarse result. In
addition, we introduce a local adaptation module that learns a set of shared
parameters to accomplish local illumination correction, addressing the varied
exposure levels across different image regions. Experimental results
demonstrate that the proposed method performs favorably against
state-of-the-art LLIE methods in both objective metrics and subjective visual
quality. Moreover, our method has fewer parameters and lower inference
complexity than other learning-based schemes.
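The core idea behind structural re-parameterization is that parallel linear branches used at training time can be algebraically folded into a single convolution at inference time. The sketch below (plain NumPy, single-channel, not the authors' actual SCLM architecture) verifies the underlying identity: summing the outputs of two conv branches applied to the same input equals one convolution with the summed kernels and biases.

```python
import numpy as np

def conv2d(x, k, b=0.0):
    """Plain 2D cross-correlation with valid padding for a single-channel image."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return out

rng = np.random.default_rng(0)
x = rng.random((8, 8))                       # toy "low-light" input patch
k1, k2 = rng.random((3, 3)), rng.random((3, 3))
b1, b2 = 0.1, -0.05

# Training-time view: two parallel conv branches, outputs summed.
multi_branch = conv2d(x, k1, b1) + conv2d(x, k2, b2)

# Inference-time view: one merged kernel and bias -> a single conv layer.
merged = conv2d(x, k1 + k2, b1 + b2)

assert np.allclose(multi_branch, merged)
```

RepVGG-style merging relies on exactly this linearity of convolution; the abstract indicates the paper applies the same principle to collapse its training-time structure into one convolutional layer.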
Related papers
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale
Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding
Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Visibility Enhancement for Low-light Hazy Scenarios [18.605784907840473]
Low-light hazy scenes commonly appear at dusk and early morning.
We propose a novel method to enhance visibility for low-light hazy scenarios.
The framework enhances the visibility of the input image by fully utilizing cues from different sub-tasks.
A simulation pipeline generates a dataset with ground truths using the proposed low-light hazy imaging model.
arXiv Detail & Related papers (2023-08-01T15:07:38Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for
Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and
Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- Learning with Nested Scene Modeling and Cooperative Architecture Search for
Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z)
- Improving Aerial Instance Segmentation in the Dark with Self-Supervised Low
Light Enhancement [6.500738558466833]
Low light conditions in aerial images adversely affect the performance of vision based applications.
We propose a new method that is capable of enhancing the low light image in a self-supervised fashion.
We also propose the generation of a new low light aerial dataset using GANs.
arXiv Detail & Related papers (2021-02-10T12:24:40Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
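Several of the papers above build on Retinex theory, which models an image as the product of reflectance and illumination and enhances low-light inputs by adjusting the illumination component. Below is a minimal classical sketch of that idea in NumPy, using the per-pixel channel maximum as the illumination estimate and a gamma curve to brighten it; this illustrates the general principle only, not any specific method listed above.

```python
import numpy as np

def retinex_gamma_enhance(img, gamma=0.45, eps=1e-6):
    """Retinex-style enhancement: decompose I = R * L, brighten L, recombine.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    L = img.max(axis=2, keepdims=True)   # illumination estimate per pixel
    R = img / (L + eps)                  # reflectance component
    L_adj = np.power(L, gamma)           # gamma < 1 lifts dark regions
    return np.clip(R * L_adj, 0.0, 1.0)

dark = np.full((4, 4, 3), 0.04)          # uniformly under-exposed toy image
bright = retinex_gamma_enhance(dark)
assert bright.mean() > dark.mean()       # output is brighter than the input
```

Learning-based variants such as the unrolling and unfolding approaches above replace the hand-crafted decomposition and adjustment steps with trained networks.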
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.