SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency
- URL: http://arxiv.org/abs/2305.08053v1
- Date: Sun, 14 May 2023 03:32:19 GMT
- Title: SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency
- Authors: Miao Zhang, Yiqing Shen and Shenghui Zhong
- Abstract summary: We present a novel low-light image enhancement model, termed Spatial Consistency Retinex Network (SCRNet).
Our proposed model incorporates three levels of consistency: channel level, semantic level, and texture level, inspired by the principle of spatial consistency.
Extensive evaluations on various low-light image datasets demonstrate that our proposed SCRNet outperforms existing state-of-the-art methods.
- Score: 22.54951703413469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images captured under low-light conditions are often plagued by several
challenges, including diminished contrast, increased noise, loss of fine
details, and unnatural color reproduction. These factors can significantly
hinder the performance of computer vision tasks such as object detection and
image segmentation. As a result, improving the quality of low-light images is
of paramount importance for practical applications in the computer vision
domain. To address these challenges, we present a novel low-light image
enhancement model, termed Spatial Consistency Retinex Network (SCRNet), which
leverages a Retinex-based structure and is guided by the principle of spatial
consistency. Specifically, our proposed model incorporates three levels of
consistency: channel level, semantic level, and texture level. These levels of
consistency enable the model to adaptively enhance image features, ensuring
more accurate and visually pleasing results. Extensive experimental evaluations
on various low-light image datasets demonstrate that SCRNet outperforms
existing state-of-the-art methods, highlighting its potential as an effective
solution for enhancing low-light images.
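The abstract's Retinex-based structure builds on the classical Retinex assumption that an image I is the pointwise product of reflectance R and illumination L, so that low-light enhancement amounts to estimating and correcting L. The sketch below is not SCRNet itself; it is a minimal single-scale Retinex decomposition assuming a numpy-only FFT Gaussian for the illumination estimate, just to illustrate the decomposition the model is based on.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian blur via FFT (assumes circular boundary), numpy only."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Fourier transform of a spatial Gaussian with std `sigma`
    kernel_ft = np.exp(-2.0 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))

def retinex_decompose(img, sigma=25.0):
    """Single-scale Retinex: I = R * L. Estimate illumination L by
    Gaussian-smoothing the image, then recover reflectance R as the
    log-ratio log(I) - log(L)."""
    img = img.astype(np.float64) + 1.0   # offset to avoid log(0)
    illumination = gaussian_blur(img, sigma)
    reflectance = np.log(img) - np.log(illumination)
    return reflectance, illumination
```

Enhancement methods in this family then brighten or re-balance the illumination map and recombine it with the (ideally lighting-invariant) reflectance; SCRNet's contribution is enforcing channel-, semantic-, and texture-level consistency on top of such a structure.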
Related papers
- DARK: Denoising, Amplification, Restoration Kit [0.7670170505111058]
This paper introduces a novel lightweight computational framework for enhancing images under low-light conditions.
Our model is designed to be lightweight, ensuring low computational demand and suitability for real-time applications on standard consumer hardware.
arXiv Detail & Related papers (2024-05-21T16:01:13Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose a new learning-based Retinex decomposition of zero-shot low-light enhancement method, called ZERRINNet.
Our method is a zero-reference enhancement method that is not affected by the training data of paired and unpaired datasets.
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
- CDAN: Convolutional Dense Attention-guided Network for Low-light Image Enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-blind Image Deblurring in Low-Light Conditions [32.35378513394865]
We propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE)
INFWIDE removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space.
Experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
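INFWIDE's name refers to Wiener deconvolution, a classical frequency-domain technique for non-blind deblurring that the network extends to image and feature space. The snippet below is not the INFWIDE model; it is a minimal textbook Wiener deconvolution sketch in numpy (circular boundary assumed, constant noise-to-signal ratio), shown only to illustrate the classical operation the paper builds on.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Classical Wiener deconvolution in the frequency domain:
    X_hat = conj(H) / (|H|^2 + NSR) * Y, where H is the blur kernel's
    transfer function and NSR is the noise-to-signal power ratio."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(G * Y))
```

The NSR term regularizes frequencies where the kernel's response is near zero; a learned network like INFWIDE replaces this hand-tuned trade-off with data-driven restoration of noise and saturated regions.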
arXiv Detail & Related papers (2022-07-17T15:22:31Z)
- Semi-supervised atmospheric component learning in low-light image problem [0.0]
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices.
This study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration.
arXiv Detail & Related papers (2022-04-15T17:06:33Z)
- Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but they disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.