LEUGAN: Low-Light Image Enhancement by Unsupervised Generative Attentional Networks
- URL: http://arxiv.org/abs/2012.13322v1
- Date: Thu, 24 Dec 2020 16:49:19 GMT
- Title: LEUGAN: Low-Light Image Enhancement by Unsupervised Generative Attentional Networks
- Authors: Yangyang Qu, Chao Liu, Yongsheng Ou
- Abstract summary: We propose an unsupervised generation network with attention-guidance to handle the low-light image enhancement task.
Specifically, our network contains two parts: an edge auxiliary module that restores sharper edges and an attention guidance module that recovers more realistic colors.
Experiments validate that our proposed algorithm performs favorably against state-of-the-art methods.
- Score: 4.584570928928926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Restoring images from low-light data is a challenging problem. Most existing
deep-network based algorithms are designed to be trained with pairwise images.
Due to the lack of real-world datasets, they usually generalize poorly in
practice, losing image edge and color information.
In this paper, we propose an unsupervised generation network with
attention-guidance to handle the low-light image enhancement task.
Specifically, our network contains two parts: an edge auxiliary module that
restores sharper edges and an attention guidance module that recovers more
realistic colors. Moreover, we propose a novel loss function to make the edges
of the generated images more visible. Experiments validate that our proposed
algorithm performs favorably against state-of-the-art methods, especially for
real-world images in terms of image clarity and noise control.
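The abstract does not specify the proposed edge-visibility loss. As an illustrative stand-in only (not the paper's formulation), an edge-aware loss term is commonly built by comparing the gradient magnitudes of the generated and reference images:

```python
import numpy as np

def edge_loss(pred, target):
    """Mean L1 distance between gradient magnitudes of two grayscale images.

    An illustrative edge-aware loss sketch, not the loss proposed in the paper.
    """
    def grad_mag(img):
        # np.gradient returns derivatives along axis 0 (rows) then axis 1 (cols)
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gx, gy)
    return float(np.abs(grad_mag(pred) - grad_mag(target)).mean())
```

A term like this is typically added to a reconstruction or adversarial loss with a small weight, penalizing outputs whose edges are weaker than those of the reference.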
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
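The exact HVI construction is not given in this summary. As a rough HSV-style illustration of what decoupling intensity from color means (a fixed stand-in, not the paper's trainable HVI space):

```python
import numpy as np

def decouple(rgb, eps=1e-6):
    # Max-channel intensity (HSV-value style); chromaticity is RGB normalized by it.
    # Illustrative only -- the paper's HVI space uses trainable parameters instead.
    intensity = rgb.max(axis=-1, keepdims=True)
    color = rgb / (intensity + eps)
    return intensity, color

def recombine(intensity, color):
    # Approximate inverse of decouple (eps makes it very slightly lossy).
    return color * intensity
```

With such a split, an enhancement network can brighten the intensity map while leaving chromaticity largely untouched, which is the stability argument the HVI abstract makes.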
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
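The summary does not show the expansion itself. One plausible sketch, assuming the identity x^γ = exp(γ ln x), truncates the Taylor series of the exponential:

```python
import numpy as np

def gamma_taylor(x, gamma, terms=12):
    """Approximate x ** gamma via a truncated Taylor series of exp(gamma * ln x).

    A sketch of the general idea, not the paper's exact formulation.
    Accurate when x is not too close to 0, so |gamma * ln x| stays small.
    """
    z = gamma * np.log(np.clip(x, 1e-6, None))
    result = np.zeros_like(z)
    term = np.ones_like(z)
    for k in range(terms):
        result += term              # accumulate z**k / k!
        term = term * z / (k + 1)   # next series term
    return result
```

The appeal of a truncated polynomial in z is that it replaces the exponential with a fixed number of multiply-adds, which maps well onto standard network layers.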
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first identify the challenge that obtaining global contrast requires a large receptive field.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition [17.008724191799313]
Intrinsic image decomposition is the process of recovering the image formation components (reflectance and shading) from an image.
In this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition.
arXiv Detail & Related papers (2022-03-30T20:46:15Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network [7.755223662467257]
We propose a novel Real-low to Real-normal Network for low-light image enhancement, dubbed R2RNet.
Unlike most previous methods trained on synthetic images, we collect the first Large-Scale Real-World paired low/normal-light image dataset.
Our method can properly improve the contrast and suppress noise simultaneously.
arXiv Detail & Related papers (2021-06-28T09:33:13Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
- Burst Denoising of Dark Images [19.85860245798819]
We propose a deep learning framework for obtaining clean and colorful RGB images from extremely dark raw images.
The backbone of our framework is a novel coarse-to-fine network architecture that generates high-quality outputs in a progressive manner.
Our experiments demonstrate that the proposed approach leads to perceptually more pleasing results than state-of-the-art methods.
arXiv Detail & Related papers (2020-03-17T17:17:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.