Dual Degradation-Inspired Deep Unfolding Network for Low-Light Image
Enhancement
- URL: http://arxiv.org/abs/2308.02776v1
- Date: Sat, 5 Aug 2023 03:07:11 GMT
- Title: Dual Degradation-Inspired Deep Unfolding Network for Low-Light Image
Enhancement
- Authors: Huake Wang, Xingsong Hou, Xiaoyang Yan
- Abstract summary: We propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement.
It learns two distinct image priors via considering degradation specificity between luminance and chrominance spaces.
Our source code and pretrained model will be publicly available.
- Score: 3.4929041108486185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although low-light image enhancement has made great strides with deep
enhancement models, most of them focus on enhancement performance via an
elaborate black-box network and rarely explore the physical significance of
enhancement models. To address this issue, we propose a Dual degrAdation-inSpired
deep Unfolding network, termed DASUNet, for low-light image enhancement.
Specifically, we construct a dual degradation model (DDM) to explicitly
simulate the deterioration mechanism of low-light images. It learns two
distinct image priors by considering the degradation specificity between the
luminance and chrominance spaces. To make the proposed scheme tractable, we design an
alternating optimization solution to solve the proposed DDM. Further, the
designed solution is unfolded into a specified deep network, imitating the
iterative updating rules, to form DASUNet. A prior modeling module (PMM), which
inherits the advantages of convolution and the Transformer, captures local and
long-range information to enhance the representation capability of the dual
degradation priors. Additionally, a space aggregation module (SAM) is presented
to boost the interaction between the two degradation models. Extensive experiments on
multiple popular low-light image datasets validate the effectiveness of DASUNet
compared to canonical state-of-the-art low-light image enhancement methods. Our
source code and pretrained model will be publicly available.
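The abstract describes unfolding an alternating optimization of two degradation models, one per color space, into a stage-wise network. The paper's actual architecture is not given here, so the following is only a minimal PyTorch sketch of that unfolding idea; the class names (`PriorModule`, `UnfoldedStage`, `ToyDASUNet`), the plain convolutional priors, and the stage count are all illustrative assumptions, not the authors' design (the real PMM combines convolution and Transformer branches, and the real stages also include data-consistency and space-aggregation steps).

```python
import torch
import torch.nn as nn

class PriorModule(nn.Module):
    """Hypothetical stand-in for the paper's prior modeling module (PMM):
    a small residual conv block. The actual PMM mixes convolution and
    Transformer branches to capture local and long-range information."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class UnfoldedStage(nn.Module):
    """One unfolded iteration of the alternating optimization:
    refine the luminance (1-channel) and chrominance (2-channel)
    estimates with their own, separately learned priors."""
    def __init__(self):
        super().__init__()
        self.lum_prior = PriorModule(1)
        self.chr_prior = PriorModule(2)
    def forward(self, lum, chroma):
        return self.lum_prior(lum), self.chr_prior(chroma)

class ToyDASUNet(nn.Module):
    """Stack K stages to imitate the iterative updating rules."""
    def __init__(self, stages=3):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldedStage() for _ in range(stages))
    def forward(self, ycbcr):
        # Input assumed already converted to a luma/chroma space.
        lum, chroma = ycbcr[:, :1], ycbcr[:, 1:]
        for stage in self.stages:
            lum, chroma = stage(lum, chroma)
        return torch.cat([lum, chroma], dim=1)

out = ToyDASUNet()(torch.rand(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 3, 32, 32])
```

The point of the sketch is only the structure: each stage corresponds to one step of the alternating solver, and the two priors are kept separate because luminance and chrominance degrade differently.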
Related papers
- Enhanced Control for Diffusion Bridge in Image Restoration [4.480905492503335]
A special type of diffusion bridge model has achieved more advanced results in image restoration.
This paper introduces the ECDB model enhancing the control of the diffusion bridge with low-quality images as conditions.
Experimental results prove that the ECDB model has achieved state-of-the-art results in many image restoration tasks.
arXiv Detail & Related papers (2024-08-29T07:09:33Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Latent Diffusion Prior Enhanced Deep Unfolding for Snapshot Spectral Compressive Imaging [17.511583657111792]
Snapshot spectral imaging reconstruction aims to reconstruct three-dimensional spatial-spectral images from a single-shot two-dimensional compressed measurement.
We introduce a generative model, namely the latent diffusion model (LDM), to generate a degradation-free prior for the deep unfolding method.
arXiv Detail & Related papers (2023-11-24T04:55:20Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones in the inverse process with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.