Low-Light Image Enhancement with Normalizing Flow
- URL: http://arxiv.org/abs/2109.05923v1
- Date: Mon, 13 Sep 2021 12:45:08 GMT
- Title: Low-Light Image Enhancement with Normalizing Flow
- Authors: Yufei Wang, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, Alex C. Kot
- Abstract summary: In this paper, we model this one-to-many relationship with a proposed normalizing flow model: an invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images to a Gaussian distribution.
Experimental results on existing benchmark datasets show that our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors.
- Score: 92.52290821418778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enhancing low-light images to normally exposed ones is highly
ill-posed: the mapping between them is one-to-many. Previous works based on
pixel-wise reconstruction losses and deterministic processes fail to capture
the complex conditional distribution of normally exposed images, which results
in improper brightness, residual noise, and artifacts. In this paper, we model
this one-to-many relationship with a proposed normalizing flow model: an
invertible network that takes the low-light images/features as the condition
and learns to map the distribution of normally exposed images to a Gaussian
distribution. In this way, the conditional distribution of the normally
exposed images can be well modeled, and the enhancement process, i.e., the
other inference direction of the invertible network, is equivalent to being
constrained during training by a loss function that better describes the
manifold structure of natural images. Experimental results on existing
benchmark datasets show that our method achieves better quantitative and
qualitative results, obtaining better-exposed illumination, less noise, fewer
artifacts, and richer colors.
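The core mechanism described above is a conditional invertible mapping: the forward direction sends a normally exposed image to a Gaussian latent given low-light features, and the inverse direction performs enhancement. The following NumPy sketch shows one conditional affine coupling step, a standard building block of such flows; it is a minimal illustration under assumed toy dimensions, not the paper's actual architecture, and `cond` merely stands in for low-light conditioning features.

```python
import numpy as np

def coupling_forward(x, cond, w_s, w_t):
    """One conditional affine coupling step: x -> (z, log|det J|)."""
    x1, x2 = np.split(x, 2)
    h = np.concatenate([x1, cond])   # condition enters every coupling step
    s = np.tanh(h @ w_s)             # bounded log-scale for stability
    t = h @ w_t                      # translation
    z2 = x2 * np.exp(s) + t
    return np.concatenate([x1, z2]), s.sum()

def coupling_inverse(z, cond, w_s, w_t):
    """Exact inverse of the step above: the enhancement direction."""
    z1, z2 = np.split(z, 2)
    h = np.concatenate([z1, cond])
    s = np.tanh(h @ w_s)
    t = h @ w_t
    x2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, x2])

rng = np.random.default_rng(0)
x = rng.normal(size=8)               # toy "normally exposed" feature vector
cond = rng.normal(size=4)            # toy low-light condition features
w_s = 0.1 * rng.normal(size=(8, 4))  # toy conditioning-network weights
w_t = 0.1 * rng.normal(size=(8, 4))

z, log_det = coupling_forward(x, cond, w_s, w_t)
x_rec = coupling_inverse(z, cond, w_s, w_t)
assert np.allclose(x, x_rec)         # invertibility: no information is lost

# Change of variables gives an exact conditional log-likelihood; maximizing
# it is the training objective that replaces pixel-wise reconstruction loss.
log_pz = -0.5 * (z @ z) - 0.5 * z.size * np.log(2 * np.pi)
log_px = log_pz + log_det
```

In a real flow, many such steps are stacked and the weights `w_s`, `w_t` are replaced by learned conditioning networks; the log-determinants of all steps sum into the likelihood.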
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Preserving Image Properties Through Initializations in Diffusion Models [6.804700416902898]
We show that Stable Diffusion methods, as currently applied, do not respect the requirements of retail photography.
The usual practice of training the denoiser with a very noisy image leads to inconsistent generated images during inference.
A network trained with centered retail product images with uniform backgrounds generates images with erratic backgrounds.
Our procedure can interact well with other control-based methods to further enhance the controllability of diffusion-based methods.
arXiv Detail & Related papers (2024-01-04T06:55:49Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- On the Robustness of Normalizing Flows for Inverse Problems in Imaging [16.18759484251522]
Unintended severe artifacts are occasionally observed in the output of conditional normalizing flows.
We empirically and theoretically reveal that these problems are caused by "exploding variance" in the conditional affine coupling layer.
We suggest a simple remedy that substitutes the affine coupling layers with the modified rational quadratic spline coupling layers in normalizing flows.
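The failure mode this paper identifies can be illustrated numerically. The sketch below is my own toy illustration of the mechanism, not code from the paper: in an affine coupling layer the sampling direction multiplies a latent by exp(s), so if the conditional log-scale s is unbounded, output variance grows as exp(2s) and a modestly large s already amplifies a unit-variance latent by many orders of magnitude.

```python
import numpy as np

# Affine coupling scales half the variables by exp(s(h)). In the sampling
# direction a unit-variance Gaussian latent is amplified by exp(2*s) in
# variance, so an unbounded conditional log-scale lets rare conditioning
# inputs blow the output up exponentially -- the "exploding variance"
# that spline coupling layers, with their bounded outputs, avoid.
for s in [0.0, 5.0, 10.0, 20.0]:
    gain = np.exp(2 * s)
    print(f"log-scale s = {s:4.1f} -> output variance amplified by {gain:.3e}")
```

Rational quadratic spline couplings sidestep this because their transform maps a bounded interval to a bounded interval, so no conditioning input can produce an unbounded scale.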
arXiv Detail & Related papers (2022-12-08T15:18:28Z)
- Semi-supervised atmospheric component learning in low-light image problem [0.0]
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices.
This study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration.
arXiv Detail & Related papers (2022-04-15T17:06:33Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms current state-of-the-art approaches.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.