Cycle-Interactive Generative Adversarial Network for Robust Unsupervised
Low-Light Enhancement
- URL: http://arxiv.org/abs/2207.00965v1
- Date: Sun, 3 Jul 2022 06:37:46 GMT
- Title: Cycle-Interactive Generative Adversarial Network for Robust Unsupervised
Low-Light Enhancement
- Authors: Zhangkai Ni, Wenhan Yang, Hanli Wang, Shiqi Wang, Lin Ma, Sam Kwong
- Abstract summary: Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
- Score: 109.335317310485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Free from the fundamental limitation of fitting to paired training
data, recent unsupervised low-light enhancement methods excel at adjusting
the illumination and contrast of images. However, for unsupervised low-light
enhancement, the remaining noise-suppression issue, caused by the lack of
supervision over detailed signals, largely impedes the wide deployment of these
methods in real-world applications. Herein, we propose a novel
Cycle-Interactive Generative Adversarial Network (CIGAN) for unsupervised
low-light image enhancement, which is capable of not only better transferring
illumination distributions between low/normal-light images but also
manipulating detailed signals between two domains, e.g.,
suppressing/synthesizing realistic noise in the cyclic enhancement/degradation
process. In particular, the proposed low-light guided transformation
feed-forwards the features of low-light images from the generator of
enhancement GAN (eGAN) into the generator of degradation GAN (dGAN). With the
learned information of real low-light images, dGAN can synthesize more
realistic and diverse illumination and contrast in low-light images. Moreover, a
feature randomized perturbation module in dGAN learns to increase feature
randomness, producing diverse feature distributions that encourage the
synthesized low-light images to contain realistic noise. Extensive experiments demonstrate
both the superiority of the proposed method and the effectiveness of each
module in CIGAN.
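The feature randomized perturbation idea above can be illustrated with a minimal pure-Python sketch. All names and the `noise_scale` parameter below are illustrative assumptions, not taken from the paper: the sketch only mirrors the mechanism of injecting random noise into dGAN's intermediate features so that repeated degradations of the same image yield diverse, noise-bearing low-light samples.

```python
import random

def feature_randomized_perturbation(features, noise_scale=0.1, seed=None):
    """Toy sketch of a feature-randomized-perturbation step: add
    Gaussian noise to intermediate features so the degradation
    generator sees a different feature distribution on each draw.
    `noise_scale` is a hypothetical knob, not a value from the paper."""
    rng = random.Random(seed)
    return [f + rng.gauss(0.0, noise_scale) for f in features]

# Two draws from the same features differ, giving dGAN diverse inputs.
feats = [0.50, 0.20, 0.90]
sample_a = feature_randomized_perturbation(feats, noise_scale=0.05, seed=1)
sample_b = feature_randomized_perturbation(feats, noise_scale=0.05, seed=2)
```

In the actual model such a perturbation would act on convolutional feature maps inside the degradation generator; the list-of-floats form here only sketches the mechanism.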
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques use deep neural networks, which require large numbers of low/normal-light image pairs, network parameters, and computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
arXiv Detail & Related papers (2024-09-27T16:37:27Z) - KAN See In the Dark [2.9873893715462185]
Existing low-light image enhancement methods struggle to fit the complex nonlinear relationship between normal- and low-light images due to uneven illumination and noise effects.
The recently proposed Kolmogorov-Arnold networks (KANs) feature spline-based convolutional layers and learnable activation functions, which can effectively capture nonlinear dependencies.
In this paper, we design a KAN-Block based on KANs and innovatively apply it to low-light image enhancement. This method effectively alleviates the limitations of current methods, which are constrained by linear network structures and a lack of interpretability.
arXiv Detail & Related papers (2024-09-05T10:41:17Z) - LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z) - LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field
Images [9.926231893220063]
Recent learning-based methods for low-light enhancement have their own disadvantages.
We propose an efficient Low-light Restoration Transformer (LRT) for LF images.
We show that our method can achieve superior performance on the restoration of extremely low-light and noisy LF images.
arXiv Detail & Related papers (2022-09-06T03:23:58Z) - Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones in the inverse process with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z) - CERL: A Unified Optimization Framework for Light Enhancement with
Realistic Noise [81.47026986488638]
Low-light images captured in the real world are inevitably corrupted by sensor noise.
Existing light enhancement methods either overlook the important impact of real-world noise during enhancement, or treat noise removal as a separate pre- or post-processing step.
We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates the light enhancement and noise suppression stages into a unified, physics-grounded framework.
arXiv Detail & Related papers (2021-08-01T15:31:15Z) - Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.