Global Structure-Aware Diffusion Process for Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2310.17577v2
- Date: Fri, 27 Oct 2023 08:26:49 GMT
- Title: Global Structure-Aware Diffusion Process for Low-Light Image Enhancement
- Authors: Jinhui Hou, Zhiyu Zhu, Junhui Hou, Hui Liu, Huanqiang Zeng, Hui Yuan
- Abstract summary: This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
- Score: 64.69154776202694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a diffusion-based framework to address the low-light image
enhancement problem. To harness the capabilities of diffusion models, we delve
into this intricate process and advocate for the regularization of its inherent
ODE-trajectory. To be specific, inspired by the recent research that low
curvature ODE-trajectory results in a stable and effective diffusion process,
we formulate a curvature regularization term anchored in the intrinsic
non-local structures of image data, i.e., global structure-aware
regularization, which gradually facilitates the preservation of complicated
details and the augmentation of contrast during the diffusion process. This
incorporation mitigates the adverse effects of noise and artifacts resulting
from the diffusion process, leading to a more precise and flexible enhancement.
To additionally promote learning in challenging regions, we introduce an
uncertainty-guided regularization technique, which wisely relaxes constraints
on the most extreme regions of the image. Experimental evaluations reveal that
the proposed diffusion-based framework, complemented by rank-informed
regularization, attains distinguished performance in low-light enhancement. The
outcomes indicate substantial advancements in image quality, noise suppression,
and contrast amplification in comparison with state-of-the-art methods. We
believe this innovative approach will stimulate further exploration and
advancement in low-light image processing, with potential implications for
other applications of diffusion models. The code is publicly available at
https://github.com/jinnh/GSAD.
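The two regularizers described above can be illustrated with a minimal pure-Python sketch. This is a hypothetical toy, not the authors' implementation: the curvature term is approximated by the mean squared second difference of a sampled (here, scalar) ODE trajectory, which vanishes on a straight line, and the uncertainty-guided term is approximated by down-weighting the loss in high-uncertainty regions. All function names and the scalar simplification are assumptions for illustration.

```python
def curvature_penalty(traj):
    """Mean squared second difference of a sampled ODE trajectory.

    traj: list of scalar states x_0 .. x_{T-1} along the diffusion ODE.
    The second difference x_{t-1} - 2*x_t + x_{t+1} is zero on a straight
    line, so its squared magnitude serves as a discrete curvature proxy.
    """
    diffs = [traj[t - 1] - 2 * traj[t] + traj[t + 1]
             for t in range(1, len(traj) - 1)]
    return sum(d * d for d in diffs) / len(diffs)


def uncertainty_weighted_loss(losses, uncertainties, eps=1e-6):
    """Relax the constraint on high-uncertainty regions (e.g. extremely
    dark or saturated pixels) by down-weighting their per-pixel loss,
    in the spirit of the uncertainty-guided regularization above."""
    weights = [1.0 / (u + eps) for u in uncertainties]
    total = sum(w * l for w, l in zip(weights, losses))
    return total / sum(weights)
```

A straight-line trajectory such as `[0.0, 0.5, 1.0]` incurs zero penalty, while a bent one such as `[0.0, 1.0, 0.0]` is penalized; pixels with larger uncertainty contribute less to the weighted loss.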
Related papers
- AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement [37.274077278901494]
We propose the Attribute Guidance Diffusion framework (AGLLDiff) for effective real-world low-light image enhancement (LIE).
AGLLDiff shifts the paradigm and models the desired attributes, such as image exposure, structure and color of normal-light images.
Our approach outperforms the current leading unsupervised LIE methods across benchmarks in terms of distortion-based and perceptual-based metrics.
arXiv Detail & Related papers (2024-07-20T15:17:48Z) - DistilDIRE: A Small, Fast, Cheap and Lightweight Diffusion Synthesized Deepfake Detection [2.8934833311559816]
Diffusion-generated images pose unique challenges to current detection technologies.
We propose distilling the knowledge embedded in diffusion models to develop rapid deepfake detection models.
Our experimental results indicate an inference speed 3.2 times faster than the existing DIRE framework.
arXiv Detail & Related papers (2024-06-02T20:22:38Z) - Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method avoids the need for post-acceleration during inference, thereby avoiding the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z) - Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z) - Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.