Invertible Network for Unpaired Low-light Image Enhancement
- URL: http://arxiv.org/abs/2112.13107v1
- Date: Fri, 24 Dec 2021 17:00:54 GMT
- Title: Invertible Network for Unpaired Low-light Image Enhancement
- Authors: Jize Zhang, Haolin Wang, Xiaohe Wu, Wangmeng Zuo
- Abstract summary: We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light images in the inverse process with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
- Score: 78.33382003460903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing unpaired low-light image enhancement approaches typically employ a
two-way GAN framework, in which two CNN generators are deployed for enhancement
and degradation separately. However, such data-driven models ignore the
inherent characteristics of transformation between the low and normal light
images, leading to unstable training and artifacts. Here, we propose to
leverage an invertible network to enhance low-light images in the forward
process and degrade normal-light images in the inverse process with unpaired
learning. The
generated and real images are then fed into discriminators for adversarial
learning. In addition to the adversarial loss, we design various loss functions
to ensure the stability of training and preserve more image details.
Particularly, a reversibility loss is introduced to alleviate the over-exposure
problem. Moreover, we present a progressive self-guided enhancement process for
low-light images and achieve favorable performance against state-of-the-art methods.
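To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of affine-coupling block an invertible network is typically built from, where forward() plays the enhancement role and inverse() the degradation role. The class name, channel split, and conditioning sub-network are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # Generic affine-coupling block: forward() is exactly invertible by
    # construction, so one set of weights can both enhance (forward) and
    # degrade (inverse). Illustrative only; not the paper's architecture.
    def __init__(self, channels=3, hidden=64):
        super().__init__()
        self.half = channels // 2
        # Predict a scale and shift for the second channel split
        # from the first split, which is passed through unchanged.
        self.net = nn.Sequential(
            nn.Conv2d(self.half, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * (channels - self.half), 3, padding=1),
        )

    def forward(self, x):  # low-light -> normal-light (enhancement role)
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)  # bounded positive scale, safe to invert
        return torch.cat([x1, x2 * s + t], dim=1)

    def inverse(self, y):  # normal-light -> low-light (degradation role)
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([y1, (y2 - t) / s], dim=1)

block = AffineCoupling(channels=3)
low = torch.rand(1, 3, 64, 64)
restored = block.inverse(block(low))             # invertible by construction
print(torch.allclose(restored, low, atol=1e-5))  # True up to float error
```

A real network would stack several such blocks, with channel permutations or 1x1 convolutions in between, so that every channel is eventually transformed rather than passed through.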
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training (a minimal mean-teacher sketch follows this list).
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, which helps enhance images with natural colors.
We also propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-Shot Enhancement of Low-Light Image Based on Retinex Decomposition [4.175396687130961]
We propose a new learning-based zero-shot low-light enhancement method based on Retinex decomposition, called ZERRINNet.
Our method is a zero-reference enhancement method that does not rely on paired or unpaired training data (a short Retinex sketch follows this list).
arXiv Detail & Related papers (2023-11-06T09:57:48Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer [0.0]
Low-light image enhancement aims at improving brightness and contrast, and reducing noise that corrupts the visual quality.
We propose a dual-branch network based on Swin Transformer, guided by a signal-to-noise ratio prior map.
arXiv Detail & Related papers (2023-06-03T11:07:56Z)
- INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-blind Image Deblurring in Low-Light Conditions [32.35378513394865]
We propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE).
INFWIDE removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space.
Experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
arXiv Detail & Related papers (2022-07-17T15:22:31Z)
- ReLLIE: Deep Reinforcement Learning for Customized Low-Light Image Enhancement [21.680891925479195]
Low-light image enhancement (LLIE) is a pervasive yet challenging problem.
This paper presents a novel deep reinforcement learning based method, dubbed ReLLIE, for customized low-light enhancement.
arXiv Detail & Related papers (2021-07-13T03:36:30Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
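For the Semi-LLIE entry above, the mean-teacher mechanism it builds on can be sketched as follows. This is a generic, hypothetical skeleton: the stand-in network and MSE consistency term are placeholders, and the paper's semantic-aware contrastive and RAM-based perceptive losses are not reproduced here.

```python
import copy
import torch

# Mean-teacher skeleton: the teacher is an exponential-moving-average (EMA)
# copy of the student, and unlabeled/unpaired images are supervised by a
# consistency term between the two networks' outputs.
def ema_update(teacher, student, decay=0.999):
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

student = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in enhancement net
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                     # teacher is never backpropped

x = torch.rand(2, 3, 64, 64)                    # batch of unpaired images
consistency = torch.nn.functional.mse_loss(student(x), teacher(x))
consistency.backward()
# an optimizer step on the student would go here, then:
ema_update(teacher, student)
```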
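And for the ZERRINNet entry, Retinex theory models an image as a pixel-wise product of reflectance and illumination, I = R · L; enhancement brightens the estimated illumination and recombines. A minimal NumPy illustration of that decomposition, where the max-over-channels illumination estimate and the gamma value are simplifying assumptions (learned methods estimate these with networks):

```python
import numpy as np

def retinex_enhance(img, gamma=0.45, eps=1e-6):
    # img: HxWx3 array in [0, 1].
    # Crude illumination estimate: per-pixel max over channels.
    L = img.max(axis=2, keepdims=True)
    R = img / (L + eps)            # reflectance from I = R * L
    L_enh = np.power(L, gamma)     # gamma < 1 brightens dark regions
    return np.clip(R * L_enh, 0.0, 1.0)

dark = np.random.rand(64, 64, 3) * 0.2      # synthetic dim image
bright = retinex_enhance(dark)
print(dark.mean(), bright.mean())           # mean brightness increases
```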