DiffLLE: Diffusion-guided Domain Calibration for Unsupervised Low-light
Image Enhancement
- URL: http://arxiv.org/abs/2308.09279v1
- Date: Fri, 18 Aug 2023 03:40:40 GMT
- Title: DiffLLE: Diffusion-guided Domain Calibration for Unsupervised Low-light
Image Enhancement
- Authors: Shuzhou Yang and Xuanyu Zhang and Yinhuai Wang and Jiwen Yu and Yuhan
Wang and Jian Zhang
- Abstract summary: Existing unsupervised low-light image enhancement methods lack sufficient effectiveness and generalization in practical applications.
We develop Diffusion-based domain calibration to realize more robust and effective unsupervised Low-Light Enhancement, called DiffLLE.
Our approach even outperforms some supervised methods by using only a simple unsupervised baseline.
- Score: 21.356254176992937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing unsupervised low-light image enhancement methods lack
sufficient effectiveness and generalization in practical applications. We attribute
this to the absence of explicit supervision and the inherent gap between
real-world scenarios and the training data domain. In this paper, we develop
Diffusion-based domain calibration to realize more robust and effective
unsupervised Low-Light Enhancement, called DiffLLE. Since the diffusion model
exhibits an impressive denoising capability and has been trained on massive clean
images, we adopt it to bridge the gap between the real low-light domain and
training degradation domain, while providing efficient priors of real-world
content for unsupervised models. Specifically, we adopt a naive unsupervised
enhancement algorithm to realize preliminary restoration and design two
zero-shot plug-and-play modules based on diffusion model to improve
generalization and effectiveness. The Diffusion-guided Degradation Calibration
(DDC) module narrows the gap between real-world and training low-light
degradation through diffusion-based domain calibration and a lightness
enhancement curve, which makes the enhancement model perform robustly even in
sophisticated wild degradation. Due to the limited enhancement effect of the
unsupervised model, we further develop the Fine-grained Target domain
Distillation (FTD) module to find a more visual-friendly solution space. It
exploits the priors of the pre-trained diffusion model to generate
pseudo-references, which shrinks the preliminary restored results from a coarse
normal-light domain to a finer high-quality clean field, addressing the lack of
strong explicit supervision for unsupervised methods. Benefiting from these,
our approach even outperforms some supervised methods by using only a simple
unsupervised baseline. Extensive experiments demonstrate the superior
effectiveness of the proposed DiffLLE.
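The abstract describes DiffLLE as two zero-shot, plug-and-play modules wrapped around a naive unsupervised enhancer. Below is a minimal PyTorch-style sketch of that pipeline; the `q_sample`/`denoise_from` interface, the lightness-curve form, and the timesteps are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the DiffLLE pipeline as summarized in the abstract:
# (1) Diffusion-guided Degradation Calibration (DDC) plus a lightness curve,
# (2) a naive unsupervised enhancer for preliminary restoration, and
# (3) Fine-grained Target domain Distillation (FTD) via a diffusion-generated
# pseudo-reference. All APIs and constants here are assumptions for illustration.
import torch

def lightness_curve(x: torch.Tensor, alpha: float = 0.6) -> torch.Tensor:
    # Simple monotone brightening curve on values in [0, 1]
    # (placeholder for the paper's lightness enhancement curve).
    return (x + alpha * x * (1.0 - x)).clamp(0.0, 1.0)

def ddc_calibrate(diffusion, x_low: torch.Tensor, t_small: int = 100) -> torch.Tensor:
    # DDC: lightly noise the real low-light image, then denoise it with the
    # pre-trained diffusion model so its degradation statistics move closer to
    # the training degradation domain of the unsupervised enhancer.
    x_noisy = diffusion.q_sample(x_low, t=t_small)        # assumed forward (noising) API
    x_cal = diffusion.denoise_from(x_noisy, t=t_small)    # assumed reverse (denoising) API
    return lightness_curve(x_cal)

def ftd_pseudo_reference(diffusion, x_coarse: torch.Tensor, t_mid: int = 300) -> torch.Tensor:
    # FTD: regenerate the coarse enhanced result through the diffusion prior to
    # obtain a cleaner, more visual-friendly pseudo-reference.
    x_noisy = diffusion.q_sample(x_coarse, t=t_mid)
    return diffusion.denoise_from(x_noisy, t=t_mid)

def diffle_enhance(enhancer, diffusion, x_low: torch.Tensor) -> torch.Tensor:
    x_in = ddc_calibrate(diffusion, x_low)                # zero-shot domain calibration
    x_coarse = enhancer(x_in)                             # naive unsupervised enhancement
    x_ref = ftd_pseudo_reference(diffusion, x_coarse)     # pseudo-reference for distillation
    # During training, x_ref would supervise the enhancer (e.g. an L1 loss between
    # x_coarse and x_ref); here it is simply returned as the refined output.
    return x_ref
```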
Related papers
- Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE).
arXiv Detail & Related papers (2024-10-16T08:07:18Z)
- AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement [37.274077278901494]
We propose the Attribute Guidance Diffusion framework (AGLLDiff) for effective real-world LIE.
AGLLDiff shifts the paradigm and models the desired attributes, such as image exposure, structure and color of normal-light images.
Our approach outperforms the current leading unsupervised LIE methods across benchmarks in terms of distortion-based and perceptual-based metrics.
arXiv Detail & Related papers (2024-07-20T15:17:48Z)
- Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z)
- Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
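The DiffusionAD summary above describes a noise-to-norm reconstruction followed by pixel-level scoring. A rough sketch of that idea is given below under assumed module names (`denoiser`, `seg_net`, `alphas_cumprod`); it illustrates the paradigm, not the paper's exact implementation.

```python
# Rough sketch of noise-to-norm, one-step denoising reconstruction followed by
# pixel-level anomaly scoring. All names and the single-step formula are
# assumptions for illustration only.
import torch

def one_step_restore(denoiser, x: torch.Tensor, t: int,
                     alphas_cumprod: torch.Tensor) -> torch.Tensor:
    # Perturb the input toward the learned "normal" prior, then recover an
    # anomaly-free estimate in a single denoising step.
    a_t = alphas_cumprod[t]
    noise = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1.0 - a_t).sqrt() * noise
    eps_hat = denoiser(x_t, t)                                 # predicted noise
    x0_hat = (x_t - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt() # one-step clean estimate
    return x0_hat.clamp(-1.0, 1.0)

def pixel_anomaly_scores(seg_net, x: torch.Tensor, x0_hat: torch.Tensor) -> torch.Tensor:
    # The segmentation sub-network scores each pixel from the input and its
    # anomaly-free restoration (concatenated along the channel dimension).
    return seg_net(torch.cat([x, x0_hat], dim=1))              # B x 1 x H x W map
```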