Denoising Diffusion Post-Processing for Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2303.09627v2
- Date: Sat, 24 Jun 2023 06:03:03 GMT
- Title: Denoising Diffusion Post-Processing for Low-Light Image Enhancement
- Authors: Savvas Panagiotou and Anna S. Bosman
- Abstract summary: Low-light image enhancement (LLIE) techniques attempt to increase the visibility of images captured in low-light scenarios.
However, enhancement reveals a variety of image degradations such as noise and color bias, and each LLIE approach may introduce its own artifacts.
Post-processing denoisers have been widely used to combat these degradations, but they often yield oversmoothed results lacking detail.
We introduce the Low-light Post-processing Diffusion Model (LPDM) to model the conditional distribution between under-exposed and normally-exposed images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) techniques attempt to increase the
visibility of images captured in low-light scenarios. However, as a result of
enhancement, a variety of image degradations such as noise and color bias are
revealed. Furthermore, each particular LLIE approach may introduce a different
form of flaw within its enhanced results. To combat these image degradations,
post-processing denoisers have been widely used, but they often yield
oversmoothed results lacking detail. We propose using a diffusion model as a
post-processing approach, and we introduce the Low-light Post-processing
Diffusion Model (LPDM) to model the conditional distribution between
under-exposed and normally-exposed images. We apply LPDM in a manner that
avoids the
computationally expensive generative reverse process of typical diffusion
models, and post-process images in one pass through LPDM. Extensive experiments
demonstrate that our approach outperforms competing post-processing denoisers
by increasing the perceptual quality of enhanced low-light images on a variety
of challenging low-light datasets. Source code is available at
https://github.com/savvaki/LPDM.
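To make the single-pass idea concrete, the following is a minimal PyTorch sketch of diffusion post-processing applied in one forward pass, i.e. without running the iterative generative reverse process. The `lpdm` noise-prediction network, the fixed timestep `t`, and the linear beta schedule are illustrative assumptions rather than the authors' exact formulation; the repository above contains the actual implementation.

```python
import torch

def post_process(lpdm, enhanced, low_light, t=200, num_steps=1000):
    """Denoise an LLIE-enhanced image in a single forward pass.

    enhanced  : output of any LLIE method, shape (B, 3, H, W), values in [0, 1]
    low_light : the original under-exposed input, same shape (conditioning)
    t         : fixed diffusion timestep at which the enhancement artifacts
                are treated as if they were forward-process noise (assumed value)
    lpdm      : hypothetical trained network eps_theta(x_t, t, c) predicting
                the noise in x_t conditioned on the low-light image c
    """
    # Illustrative linear beta schedule and cumulative alpha products.
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    a_bar = alphas_cumprod[t]

    # Treat the degraded enhanced image as a noisy sample x_t and predict
    # the noise it contains, conditioned on the under-exposed image.
    t_batch = torch.full((enhanced.shape[0],), t, dtype=torch.long)
    eps = lpdm(enhanced, t_batch, low_light)

    # One-step estimate of the clean image: x_0 = (x_t - sqrt(1 - a_bar) * eps) / sqrt(a_bar).
    x0 = (enhanced - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)
    return x0.clamp(0.0, 1.0)
```

The design choice sketched here is that a single noise-prediction pass suffices because the input is an already-enhanced image whose residual degradations are assumed to resemble diffusion noise at some fixed timestep, rather than a pure-noise sample requiring full reverse sampling.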
Related papers
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models
for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - Stimulating the Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling [56.506240377714754]
We present a novel strategy called the Diffusion Model for Image Denoising (DMID).
Our strategy includes an adaptive embedding method that embeds the noisy image into a pre-trained unconditional diffusion model.
Our DMID strategy achieves state-of-the-art performance on both distortion-based and perception-based metrics.
arXiv Detail & Related papers (2023-07-08T14:59:41Z) - ACDMSR: Accelerated Conditional Diffusion Models for Single Image
Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)