Low-Light Image Enhancement by Learning Contrastive Representations in
Spatial and Frequency Domains
- URL: http://arxiv.org/abs/2303.13412v1
- Date: Thu, 23 Mar 2023 16:32:49 GMT
- Title: Low-Light Image Enhancement by Learning Contrastive Representations in
Spatial and Frequency Domains
- Authors: Yi Huang, Xiaoguang Tu, Gui Fu, Tingting Liu, Bokai Liu, Ming Yang,
Ziliang Feng
- Abstract summary: We propose to incorporate contrastive learning into an illumination correction network to learn abstract representations that distinguish various low-light conditions.
Considering that light conditions can change the frequency components of images, the representations are learned and compared in both the spatial and frequency domains.
The results show that the proposed method achieves better qualitative and quantitative results than other state-of-the-art methods.
- Score: 8.741111756168916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images taken under low-light conditions tend to suffer from poor visibility,
which can decrease image quality and even reduce the performance of the
downstream tasks. It is hard for a CNN-based method to learn generalized
features that can recover normal images from ones captured under various unknown
low-light conditions. In this paper, we propose to incorporate contrastive
learning into an illumination correction network to learn abstract
representations to distinguish various low-light conditions in the
representation space, with the purpose of enhancing the generalizability of the
network. Considering that light conditions can change the frequency components
of the images, the representations are learned and compared in both spatial and
frequency domains to take full advantage of contrastive learning. The
proposed method is evaluated on the LOL and LOL-V2 datasets, and the results show
that it achieves better qualitative and quantitative results than other
state-of-the-art methods.
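As a rough illustration of the core idea, the sketch below computes an InfoNCE-style contrastive loss on both spatial features and frequency features (taken here as FFT amplitude spectra) of the same images. The encoders `spatial_enc` and `freq_enc`, the use of amplitude spectra, and the single positive/negatives arrangement are assumptions for illustration, not the authors' exact design.

```python
# Hedged sketch (not the paper's implementation): an InfoNCE-style contrastive
# loss evaluated on both spatial and frequency (FFT amplitude) representations.
# spatial_enc / freq_enc are hypothetical encoders returning 1-D feature vectors.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """anchor, positive: (D,); negatives: (N, D); positive sits at index 0."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum().view(1, 1) / temperature   # (1, 1)
    neg_logits = (negatives @ anchor).view(1, -1) / temperature      # (1, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    target = torch.zeros(1, dtype=torch.long)                        # positive is class 0
    return F.cross_entropy(logits, target)

def frequency_amplitude(img):
    """Amplitude spectrum as a simple frequency-domain view of an image tensor."""
    return torch.abs(torch.fft.fft2(img, norm="ortho"))

def dual_domain_loss(img_a, img_p, img_negs, spatial_enc, freq_enc):
    # Spatial-domain branch: compare encoded images directly.
    loss_s = info_nce(spatial_enc(img_a), spatial_enc(img_p),
                      torch.stack([spatial_enc(n) for n in img_negs]))
    # Frequency-domain branch: compare encodings of the amplitude spectra.
    loss_f = info_nce(freq_enc(frequency_amplitude(img_a)),
                      freq_enc(frequency_amplitude(img_p)),
                      torch.stack([freq_enc(frequency_amplitude(n)) for n in img_negs]))
    return loss_s + loss_f
```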
Related papers
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
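The entry above describes an unfolding scheme in which each stage's enhanced output yields an illumination estimate that drives the next stage. A minimal sketch of that loop follows; the module structure, the Retinex-style division, and all layer sizes are placeholder assumptions rather than DCUNet's actual architecture.

```python
# Placeholder sketch of the unfolding idea: each stage estimates an illumination
# map from the current enhanced result and uses it to refine the next result.
# Not DCUNet's architecture; layer shapes and the Retinex-style update are assumed.
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.illum_net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        self.refine_net = nn.Sequential(
            nn.Conv2d(channels + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1))

    def forward(self, low_light, current):
        illum = self.illum_net(current)                  # illumination from current estimate
        corrected = low_light / illum.clamp(min=1e-3)    # Retinex-style brightening
        return self.refine_net(torch.cat([corrected, illum], dim=1))

class UnfoldedEnhancer(nn.Module):
    def __init__(self, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldingStage() for _ in range(num_stages))

    def forward(self, low_light):
        result = low_light
        for stage in self.stages:        # repeated stages refine the estimate
            result = stage(low_light, result)
        return result
```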
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency [22.54951703413469]
We present a novel low-light image enhancement model, termed the Spatial Consistency Retinex Network (SCRNet).
Our proposed model incorporates three levels of consistency: channel level, semantic level, and texture level, inspired by the principle of spatial consistency.
Extensive evaluations on various low-light image datasets demonstrate that our proposed SCRNet outshines existing state-of-the-art methods.
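As a rough illustration of multi-level consistency, the sketch below penalizes disagreement at the channel, texture (gradient), and semantic (pretrained-feature) levels. The specific penalties and the `feature_extractor` argument are assumptions; SCRNet's actual formulation may differ.

```python
# Assumed illustration of channel / texture / semantic consistency terms;
# not SCRNet's actual losses. feature_extractor is a hypothetical frozen backbone.
import torch
import torch.nn.functional as F

def channel_consistency(pred, ref):
    # Per-channel global statistics (means over H, W) should agree.
    return F.l1_loss(pred.mean(dim=(2, 3)), ref.mean(dim=(2, 3)))

def texture_consistency(pred, ref):
    # Local gradients (simple finite differences) should agree.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return F.l1_loss(dx(pred), dx(ref)) + F.l1_loss(dy(pred), dy(ref))

def semantic_consistency(pred, ref, feature_extractor):
    # Features from a frozen pretrained network should agree.
    with torch.no_grad():
        ref_feat = feature_extractor(ref)
    return F.l1_loss(feature_extractor(pred), ref_feat)

def consistency_loss(pred, ref, feature_extractor, weights=(1.0, 1.0, 1.0)):
    w_c, w_t, w_s = weights
    return (w_c * channel_consistency(pred, ref)
            + w_t * texture_consistency(pred, ref)
            + w_s * semantic_consistency(pred, ref, feature_extractor))
```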
arXiv Detail & Related papers (2023-05-14T03:32:19Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the challenge posed by the need for a large receptive field to obtain global contrast.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
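One plausible reading of these two ingredients is sketched below: a cheap global-context feature obtained by pooling over the whole image, and a "relative" loss that compares images after removing each image's own mean brightness, so relative contrast rather than absolute exposure is penalized. Both are assumptions for illustration; FLW-Net's actual definitions may differ.

```python
# Illustrative assumptions only, not FLW-Net's exact components: a pooled global
# feature giving every pixel image-wide context, and a loss on mean-removed
# images so the comparison is relative rather than absolute.
import torch
import torch.nn.functional as F

def global_context(x, grid=8):
    """Downsample heavily, then upsample back, so each pixel sees global statistics."""
    pooled = F.adaptive_avg_pool2d(x, grid)
    return F.interpolate(pooled, size=x.shape[-2:], mode="bilinear", align_corners=False)

def relative_loss(pred, target):
    """Compare structures after subtracting each image's own mean brightness."""
    pred_rel = pred - pred.mean(dim=(2, 3), keepdim=True)
    target_rel = target - target.mean(dim=(2, 3), keepdim=True)
    return F.l1_loss(pred_rel, target_rel)
```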
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- DEANet: Decomposition Enhancement and Adjustment Network for Low-Light Image Enhancement [8.328470427768695]
This paper proposes DEANet, a Retinex-based network for low-light image enhancement.
It combines the frequency and content information of the image across three sub-networks.
Our model produces robust results for all low-light images.
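A minimal sketch of the frequency/content split mentioned above: a Gaussian low-pass separates the image into a low-frequency component and a high-frequency detail residual, which separate sub-networks could then process. The kernel construction and routing of the components are assumptions, not DEANet's actual decomposition.

```python
# Assumed illustration of splitting an image into low- and high-frequency parts
# that separate sub-networks could process; not DEANet's actual decomposition.
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=2.0):
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)                 # separable 2-D Gaussian

def frequency_split(img, size=7, sigma=2.0):
    """img: (B, C, H, W) -> (low_freq, high_freq) with low + high == img."""
    c = img.shape[1]
    k = gaussian_kernel(size, sigma).to(img).view(1, 1, size, size).repeat(c, 1, 1, 1)
    low = F.conv2d(img, k, padding=size // 2, groups=c)   # depthwise blur = low-pass
    return low, img - low
```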
arXiv Detail & Related papers (2022-09-14T03:01:55Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
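A rough sketch of the distribution-matching idea: match channel-wise feature statistics (mean and standard deviation) of the denoised output, taken from a frozen pretrained classifier, against those of clean images. Using simple moments as a stand-in for the full probabilistic distribution is an assumption for illustration, not D2SM's exact mechanism.

```python
# Assumed illustration: match channel-wise feature statistics of denoised images
# to those of clean images in a frozen pretrained network's feature space.
# Simple mean/std moments stand in for D2SM's full distribution matching.
import torch
import torch.nn.functional as F

def feature_moments(feat, eps=1e-6):
    """feat: (B, C, H, W) -> per-sample channel-wise mean and std, each (B, C)."""
    mean = feat.mean(dim=(2, 3))
    std = feat.var(dim=(2, 3), unbiased=False).add(eps).sqrt()
    return mean, std

def semantic_statistics_loss(denoised, clean, backbone):
    with torch.no_grad():                     # clean images define the target statistics
        clean_mean, clean_std = feature_moments(backbone(clean))
    den_mean, den_std = feature_moments(backbone(denoised))
    return F.l1_loss(den_mean, clean_mean) + F.l1_loss(den_std, clean_std)
```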
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Semi-supervised atmospheric component learning in low-light image problem [0.0]
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices.
This study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration.
arXiv Detail & Related papers (2022-04-15T17:06:33Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light images in the inverse process, using unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
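The invertible idea described above can be illustrated with a standard additive coupling layer, where the same parameters define an exact forward (enhancement direction) and inverse (degradation direction) mapping. The block below is a generic coupling layer for illustration, not this paper's network.

```python
# Generic additive coupling layer illustrating exact invertibility; the paper's
# network is more elaborate, so this is only an assumed, minimal stand-in.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(self.half, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels - self.half, 3, padding=1))

    def forward(self, x):
        x1, x2 = torch.split(x, [self.half, x.shape[1] - self.half], dim=1)
        y2 = x2 + self.net(x1)        # transform one half conditioned on the other
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = torch.split(y, [self.half, y.shape[1] - self.half], dim=1)
        x2 = y2 - self.net(y1)        # exactly undo forward() with the same weights
        return torch.cat([y1, x2], dim=1)
```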
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- ReLLIE: Deep Reinforcement Learning for Customized Low-Light Image Enhancement [21.680891925479195]
Low-light image enhancement (LLIE) is a pervasive yet challenging problem.
This paper presents a novel deep reinforcement learning based method, dubbed ReLLIE, for customized low-light enhancement.
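A very rough sketch of the reinforcement-learning framing: a policy network emits per-pixel curve parameters as its action, the action is applied as a light-enhancement curve, and the loop repeats for several steps so the enhancement can be customized per image. The quadratic curve form and the tiny policy below are assumptions for illustration, not ReLLIE's exact design.

```python
# Assumed illustration of an RL-style enhancement loop: a policy proposes
# per-pixel curve parameters (actions), applied iteratively as enhancement curves.
import torch
import torch.nn as nn

class CurvePolicy(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh())   # action in [-1, 1]

    def forward(self, state):
        return self.net(state)

def enhance_with_policy(img, policy, steps=4):
    """Apply the policy's per-pixel curve adjustment for a fixed number of steps."""
    x = img
    for _ in range(steps):
        alpha = policy(x)                 # action: per-pixel curve parameter
        x = x + alpha * x * (1.0 - x)     # quadratic enhancement curve, stays in [0, 1]
        x = x.clamp(0.0, 1.0)
    return x
```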
arXiv Detail & Related papers (2021-07-13T03:36:30Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods and has a significant advantage over them when processing images captured under extremely low lighting conditions.
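The noise-aware Retinex view can be summarized as I = R * L + N. The sketch below fits a reflectance map and an illumination map to a single noisy low-light observation with a robust fidelity term and a smoothness prior; this network-free, per-image optimization is only an assumed illustration of that model, not the paper's deep bilateral method.

```python
# Assumed illustration of the noise-aware Retinex model I = R * L + N: fit a
# reflectance map R and illumination map L to one noisy low-light image with a
# robust fidelity term plus a smoothness prior. Not the paper's deep method.
import torch

def fit_retinex(img, steps=200, lr=0.05, tv_weight=0.1):
    """img: (1, 3, H, W) in [0, 1]. Returns estimated reflectance and illumination."""
    refl = img.clone().requires_grad_(True)                           # reflectance init
    illum = torch.full_like(img[:, :1], 0.5, requires_grad=True)      # 1-channel illumination

    def tv(t):  # total-variation smoothness prior
        return (t[..., :, 1:] - t[..., :, :-1]).abs().mean() + \
               (t[..., 1:, :] - t[..., :-1, :]).abs().mean()

    opt = torch.optim.Adam([refl, illum], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = refl.clamp(0, 1) * illum.clamp(0, 1)
        fidelity = (recon - img).abs().mean()     # L1 fidelity is robust to measurement noise
        loss = fidelity + tv_weight * tv(illum)   # illumination should vary smoothly
        loss.backward()
        opt.step()
    return refl.detach().clamp(0, 1), illum.detach().clamp(0, 1)
```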
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework that enhances real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)