Personalized Generative Low-light Image Denoising and Enhancement
- URL: http://arxiv.org/abs/2412.14327v1
- Date: Wed, 18 Dec 2024 20:43:38 GMT
- Title: Personalized Generative Low-light Image Denoising and Enhancement
- Authors: Xijun Wang, Prateek Chennuri, Yu Yuan, Bole Ma, Xingguang Zhang, Stanley Chan
- Abstract summary: We propose Personalized Generative Denoising (PGD) by building a diffusion model customized for different users.
Our core innovation is an identity-consistent physical buffer that extracts the physical attributes of the person from the gallery.
Over a wide range of low-light testing scenarios, we show that PGD achieves superior image denoising and enhancement performance.
- Score: 3.2423254294855735
- License:
- Abstract: While smartphone cameras today can produce astonishingly good photos, their performance in low light is still not completely satisfactory because of the fundamental limits in photon shot noise and sensor read noise. Generative image restoration methods have demonstrated promising results compared to traditional methods, but they suffer from hallucinatory content generation when the signal-to-noise ratio (SNR) is low. Recognizing the availability of personalized photo galleries on users' smartphones, we propose Personalized Generative Denoising (PGD) by building a diffusion model customized for different users. Our core innovation is an identity-consistent physical buffer that extracts the physical attributes of the person from the gallery. This ID-consistent physical buffer provides a strong prior that can be integrated with the diffusion model to restore the degraded images, without the need for fine-tuning. Over a wide range of low-light testing scenarios, we show that PGD achieves superior image denoising and enhancement performance compared to existing diffusion-based denoising approaches.
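As a rough illustration of the conditioning idea described in the abstract (a frozen diffusion denoiser guided by an identity-consistent buffer rather than fine-tuned), here is a minimal PyTorch-style sketch. The denoiser interface, the channel-wise concatenation, and all names are assumptions for illustration, not the actual PGD implementation.

```python
# Hypothetical sketch: restoring a degraded low-light image with a *frozen*
# diffusion denoiser conditioned on an identity-consistent buffer.
# The denoiser signature and concatenation-based conditioning are assumptions,
# not the actual PGD architecture.
import torch

@torch.no_grad()
def restore_with_id_prior(denoiser, degraded, id_buffer, betas):
    """denoiser(x_t, t, cond) -> predicted noise; its weights stay frozen.
    betas: 1-D tensor holding the noise schedule."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(degraded)                  # start from pure noise
    for t in reversed(range(len(betas))):
        a_t, ab_t = alphas[t], alpha_bars[t]
        # Condition on both the degraded observation and the ID buffer,
        # e.g. by channel-wise concatenation (one plausible choice).
        cond = torch.cat([degraded, id_buffer], dim=1)
        eps = denoiser(x, torch.tensor([t]), cond)
        mean = (x - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # ancestral DDPM-style step
    return x
```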
Related papers
- Positive2Negative: Breaking the Information-Lossy Barrier in Self-Supervised Single Image Denoising [26.67217493971613]
Existing self-supervised image denoising paradigms rely heavily on information-lossy operations.
We propose a novel self-supervised single image denoising paradigm, Positive2Negative, to break the information-lossy barrier.
Our paradigm achieves state-of-the-art performance in self-supervised single image denoising with significant speed improvements.
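For context on the claim above, "information-lossy operations" typically refers to steps such as blind-spot masking or pixel subsampling that discard part of the noisy observation before training. The snippet below illustrates one such masking step only; it is not the Positive2Negative method.

```python
# Illustration of an information-lossy operation (random blind-spot masking),
# the kind of step Positive2Negative aims to avoid; this is NOT their method.
import numpy as np

def blind_spot_mask(noisy, mask_ratio=0.05, rng=np.random.default_rng(0)):
    """Replace a random subset of pixels with nearby neighbors, discarding
    their original (information-carrying) noisy values."""
    h, w = noisy.shape[:2]
    out = noisy.copy()
    n = int(mask_ratio * h * w)
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    # substitute each masked pixel with a randomly shifted neighbor
    dy = rng.integers(-2, 3, n)
    dx = rng.integers(-2, 3, n)
    out[ys, xs] = noisy[np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)]
    return out
```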
arXiv Detail & Related papers (2024-12-21T03:25:01Z)
- FreeEnhance: Tuning-Free Image Enhancement via Content-Consistent Noising-and-Denoising Process [120.91393949012014]
FreeEnhance is a framework for content-consistent image enhancement using off-the-shelf image diffusion models.
In the noising stage, FreeEnhance adds lighter noise to regions with higher frequency content, preserving the high-frequency patterns of the original image.
In the denoising stage, we present three target properties as constraints to regularize the predicted noise, enhancing images with high acutance and high visual quality.
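A minimal sketch of the frequency-dependent noising described above, assuming a Laplacian-based measure of local high-frequency content and a simple linear attenuation; the exact frequency measure and noise schedule in FreeEnhance may differ.

```python
# Sketch of frequency-aware noising: add lighter noise where local
# high-frequency content is strong. The Laplacian measure is an assumed stand-in.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def frequency_aware_noising(img, base_sigma=0.1, rng=np.random.default_rng(0)):
    """img: grayscale float image in [0, 1]."""
    hf = np.abs(laplace(img))                # crude high-frequency response
    hf = uniform_filter(hf, size=7)          # smooth into a regional map
    hf = hf / (hf.max() + 1e-8)              # normalize to [0, 1]
    sigma = base_sigma * (1.0 - 0.8 * hf)    # lighter noise in high-freq regions
    return img + sigma * rng.standard_normal(img.shape)
```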
arXiv Detail & Related papers (2024-09-11T17:58:50Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage a pre-trained latent diffusion model to perform neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
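One way to picture the "lightweight taming modules" is as small trainable adapters that map RAW-domain inputs into a frozen latent diffusion backbone. The adapter below is a hypothetical illustration of that pattern, with channel counts and layers chosen arbitrarily rather than taken from LDM-ISP.

```python
# Hypothetical lightweight "taming" adapter: a small trainable block that maps
# RAW-domain input into features a frozen latent diffusion model can consume.
import torch.nn as nn

class TamingAdapter(nn.Module):
    def __init__(self, raw_channels=4, latent_channels=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(raw_channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_channels, 3, padding=1),
        )

    def forward(self, raw):
        # only these few layers are trained; the diffusion backbone stays frozen
        return self.net(raw)
```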
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- SVNR: Spatially-variant Noise Removal with Denoising Diffusion [43.2405873681083]
We present a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model.
In experiments we demonstrate the advantages of our approach over a strong diffusion model baseline, as well as over a state-of-the-art single image denoising method.
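The spatially-variant assumption can be pictured as replacing the scalar noise level of the usual forward process with a per-pixel sigma map; the sketch below shows only that substitution and is not SVNR's exact formulation.

```python
# Sketch of a spatially-variant forward noising step: the scalar noise level is
# replaced by a per-pixel sigma map. This illustrates the idea, not SVNR itself.
import numpy as np

def forward_step_spatially_variant(x0, alpha_bar_t, sigma_map,
                                   rng=np.random.default_rng(0)):
    eps = rng.standard_normal(x0.shape)
    # per-pixel noise scale instead of a single scalar sigma
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * sigma_map * eps
```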
arXiv Detail & Related papers (2023-06-28T09:32:00Z)
- Robust Deep Ensemble Method for Real-world Image Denoising [62.099271330458066]
We propose a simple yet effective Bayesian deep ensemble (BDE) method for real-world image denoising.
Our BDE achieves +0.28dB PSNR gain over the state-of-the-art denoising method.
Our BDE can be extended to other image restoration tasks, and achieves +0.30dB, +0.18dB and +0.12dB PSNR gains on benchmark datasets.
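The ensemble idea can be illustrated as a per-pixel weighted fusion of several denoisers' outputs; the consensus-based weighting below is a generic placeholder, not the Bayesian rule used by BDE.

```python
# Generic illustration of fusing multiple denoiser outputs with per-pixel
# weights. The consensus-based weighting is a placeholder, not BDE's exact rule.
import numpy as np

def fuse_denoisers(outputs, eps=1e-6):
    """outputs: list of candidate denoised images with identical shape."""
    stack = np.stack(outputs, axis=0)                      # (K, H, W[, C])
    consensus = np.median(stack, axis=0, keepdims=True)
    # down-weight candidates that stray far from the ensemble consensus
    weights = 1.0 / (np.abs(stack - consensus) + eps)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)
```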
arXiv Detail & Related papers (2022-06-08T06:19:30Z)
- CERL: A Unified Optimization Framework for Light Enhancement with Realistic Noise [81.47026986488638]
Low-light images captured in the real world are inevitably corrupted by sensor noise.
Existing light enhancement methods either overlook the important impact of real-world noise during enhancement, or treat noise removal as a separate pre- or post-processing step.
We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates light enhancement and noise suppression into a unified and physics-grounded framework.
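A generic way to picture "integrating" the two parts is an alternation between an enhancement step and a learned denoising step inside one loop; the toy sketch below only contrasts this with treating denoising as a separate pre- or post-process and is not CERL's actual optimization.

```python
# Generic illustration of coupling light enhancement with a learned denoiser in
# one loop (plug-and-play style). This is NOT CERL's actual formulation.
import numpy as np

def enhance_then_suppress(low_light, enhance_fn, denoise_fn, iters=3):
    x = low_light
    for _ in range(iters):
        x = enhance_fn(x)     # brighten (e.g., a curve / Retinex-style step)
        x = denoise_fn(x)     # suppress the noise the enhancement amplified
    return np.clip(x, 0.0, 1.0)
```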
arXiv Detail & Related papers (2021-08-01T15:31:15Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
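One simple way to illustrate "integrating noise estimation and denoising" is a network that predicts both a clean image and a per-pixel log-variance, trained with a Gaussian negative log-likelihood; this is a generic stand-in, not VDN's full variational objective.

```python
# Generic illustration of joint noise estimation + denoising: the network
# predicts a clean image and a per-pixel log-variance, trained with a Gaussian
# negative log-likelihood. Simplified stand-in, not VDN's variational objective.
import torch

def gaussian_nll_loss(pred_clean, pred_logvar, noisy):
    # residual between the noisy observation and the predicted clean image
    resid = noisy - pred_clean
    # negative log-likelihood of the residual under N(0, exp(logvar)) per pixel
    return (0.5 * (pred_logvar + resid.pow(2) / pred_logvar.exp())).mean()
```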
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.