Diffusion-Based sRGB Real Noise Generation via Prompt-Driven Noise Representation Learning
- URL: http://arxiv.org/abs/2603.04870v1
- Date: Thu, 05 Mar 2026 06:54:38 GMT
- Title: Diffusion-Based sRGB Real Noise Generation via Prompt-Driven Noise Representation Learning
- Authors: Jaekyun Ko, Dongjin Kim, Soomin Lee, Guanghui Wang, Tae Hyun Kim
- Abstract summary: We propose a novel framework called Prompt-Driven Noise Generation (PNG). The model acquires high-dimensional prompt features that capture the characteristics of real-world input noise. By eliminating the dependency on explicit camera metadata, our approach significantly enhances the generalizability and applicability of noise synthesis.
- Score: 16.09820578603153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoising in the sRGB image space is challenging due to noise variability. Although end-to-end methods perform well, their effectiveness in real-world scenarios is limited by the scarcity of real noisy-clean image pairs, which are expensive and difficult to collect. To address this limitation, several generative methods have been developed to synthesize realistic noisy images from limited data. These generative approaches often rely on camera metadata during both training and testing to synthesize real-world noise. However, the lack of metadata or inconsistencies between devices restricts their usability. Therefore, we propose a novel framework called Prompt-Driven Noise Generation (PNG). This model is capable of acquiring high-dimensional prompt features that capture the characteristics of real-world input noise and creating a variety of realistic noisy images consistent with the distribution of the input noise. By eliminating the dependency on explicit camera metadata, our approach significantly enhances the generalizability and applicability of noise synthesis. Comprehensive experiments reveal that our model effectively produces realistic noisy images and show the successful application of these generated images in removing real-world noise across various benchmark datasets.
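The abstract's central workflow, characterizing the noise of a reference image and then synthesizing new noisy images consistent with that distribution, no camera metadata required, can be illustrated with a deliberately simplified toy analogue. This is NOT the PNG model: where PNG learns high-dimensional prompt features and samples with a diffusion model, the sketch below summarizes the reference noise with a single standard-deviation estimate and samples Gaussian noise. Only the overall shape of the pipeline is faithful; both function names are illustrative.

```python
import numpy as np

# Toy analogue of metadata-free noise synthesis (not the PNG model itself):
# the learned prompt encoder is replaced by a single std estimate, and the
# diffusion sampler by a Gaussian draw.

def estimate_noise_sigma(noisy, clean):
    """Characterise the reference noise; here, just its standard deviation."""
    return float(np.std(noisy - clean))

def synthesize_noisy(clean_target, sigma, rng):
    """Create a new noisy image consistent with the reference noise level."""
    return clean_target + rng.normal(0.0, sigma, clean_target.shape)

rng = np.random.default_rng(0)
clean_ref = np.zeros((32, 32, 3))
noisy_ref = clean_ref + rng.normal(0.0, 0.05, clean_ref.shape)

sigma = estimate_noise_sigma(noisy_ref, clean_ref)   # close to 0.05
new_noisy = synthesize_noisy(np.zeros((32, 32, 3)), sigma, rng)
```

The key property mirrored here is that nothing about the camera (ISO, sensor model) is consumed: the noise description is derived entirely from the reference image itself.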
Related papers
- GuidNoise: Single-Pair Guided Diffusion for Generalized Noise Synthesis [9.253859022117306]
We propose GuidNoise, a single-pair guided diffusion framework for generalized noise synthesis. GuidNoise uses a single noisy/clean pair as guidance, which is often easily obtained within a training set. It employs a guidance-aware affine feature modification (GAFM) and a noise-aware refinement loss to leverage the inherent potential of diffusion models.
arXiv Detail & Related papers (2025-12-04T05:00:00Z)
- NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation [86.7260950382448]
We propose NoiseDiffusion, a novel approach that corrects noise to preserve image validity during interpolation.
NoiseDiffusion performs within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
arXiv Detail & Related papers (2024-03-13T12:32:25Z)
- Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- NoiseTransfer: Image Noise Generation with Contrastive Embeddings [9.322843611215486]
We propose a new generative model that can synthesize noisy images with multiple different noise distributions.
We adopt recent contrastive learning to learn distinguishable latent features of the noise.
Our model can generate new noisy images by transferring the noise characteristics solely from a single reference noisy image.
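The idea of learning distinguishable latent features of noise can be illustrated with a minimal InfoNCE-style contrastive loss on precomputed embeddings. This is an assumption about the general technique, not NoiseTransfer's actual loss or encoder: two crops sharing the same noise characteristics act as a positive pair, and crops with unrelated noise act as negatives.

```python
import numpy as np

# Minimal InfoNCE-style contrastive loss on noise embeddings (illustrative;
# the paper's exact contrastive setup is not described in the abstract).

def info_nce(anchor, positive, negatives, temperature=0.1):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive pair occupies index 0; the loss rewards ranking it first.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(1)
anchor = rng.normal(size=16)
positive = anchor + 0.05 * rng.normal(size=16)   # near-identical noise style
negatives = [rng.normal(size=16) for _ in range(4)]

loss_pos = info_nce(anchor, positive, negatives)
loss_neg = info_nce(anchor, negatives[0], negatives[1:] + [positive])
print(loss_pos < loss_neg)  # True: matching noise styles yield lower loss
```

Training an encoder under such a loss pushes images with similar noise together in embedding space, which is what enables transferring noise characteristics from a single reference image.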
arXiv Detail & Related papers (2023-01-31T11:09:15Z)
- Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training [50.018580462619425]
We propose a novel framework, namely the Pixel-level Noise-aware Generative Adversarial Network (PNGAN).
PNGAN employs a pre-trained real denoiser to map the fake and real noisy images into a nearly noise-free solution space.
For better noise fitting, we present an efficient architecture, the Simple Multi-scale Network (SMNet), as the generator.
arXiv Detail & Related papers (2022-04-06T14:09:02Z)
- C2N: Practical Generative Noise Modeling for Real-World Denoising [53.96391787869974]
We introduce a Clean-to-Noisy image generation framework, namely C2N, to imitate complex real-world noise without using paired examples.
We construct the noise generator in C2N accordingly with each component of real-world noise characteristics to express a wide range of noise accurately.
arXiv Detail & Related papers (2022-02-19T05:53:46Z)
- Rethinking Noise Synthesis and Modeling in Raw Denoising [75.55136662685341]
We introduce a new perspective to synthesize noise by directly sampling from the sensor's real noise.
It inherently generates accurate raw image noise for different camera sensors.
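"Directly sampling from the sensor's real noise" can be sketched as a patch-bank procedure: collect residual (noisy minus clean) patches from one sensor's image pairs, then paste randomly drawn residuals onto new clean images. The patch size and bank layout below are illustrative choices, not the paper's method.

```python
import numpy as np

# Sketch of direct real-noise sampling: a bank of residual patches from one
# sensor is reused to noise new clean images. Patch size is illustrative.

def build_noise_bank(noisy, clean, patch=8):
    """Cut the residual image into non-overlapping patches."""
    residual = noisy - clean
    h, w = residual.shape[:2]
    return [residual[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def apply_sampled_noise(clean_target, bank, patch=8, rng=None):
    """Tile the target with randomly drawn real-noise patches."""
    if rng is None:
        rng = np.random.default_rng()
    out = clean_target.copy()
    h, w = out.shape[:2]
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            out[i:i + patch, j:j + patch] += bank[rng.integers(len(bank))]
    return out

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
bank = build_noise_bank(noisy, clean)                 # 16 patches of 8x8
synth = apply_sampled_noise(np.zeros((32, 32)), bank, rng=rng)
```

Because the patches come from a real capture, the synthesized noise inherits that sensor's statistics without any parametric model of the camera.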
arXiv Detail & Related papers (2021-10-10T10:45:24Z)
- Physics-based Noise Modeling for Extreme Low-light Photography [63.65570751728917]
We study the noise statistics in the imaging pipeline of CMOS photosensors.
We formulate a comprehensive noise model that can accurately characterize the real noise structures.
Our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms.
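A widely used starting point for such physics-based models is the Poisson-Gaussian (heteroscedastic) formulation: signal-dependent shot noise from photon counting plus signal-independent Gaussian read noise. The sketch below shows only this simplified core; the paper's full model characterizes additional CMOS components, and the gain and read-noise values here are illustrative.

```python
import numpy as np

# Simplified physics-based raw noise: Poisson shot noise (signal-dependent)
# plus Gaussian read noise (signal-independent). Parameter values are
# illustrative, not calibrated to any real sensor.

def simulate_raw_noise(clean_electrons, gain=1.0, read_std=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    shot = rng.poisson(clean_electrons)                  # photon shot noise
    read = rng.normal(0.0, read_std, clean_electrons.shape)
    return gain * (shot + read)

rng = np.random.default_rng(3)
clean = np.full((64, 64), 100.0)                         # 100 e-/pixel
noisy = simulate_raw_noise(clean, rng=rng)
# For this model the noise variance is approximately signal + read_std**2,
# i.e. about 104 here.
noise_var = float(np.var(noisy - clean))
```

The signal-dependence is the crucial property: bright pixels are noisier in absolute terms than dark ones, which a plain additive Gaussian model cannot reproduce and which matters most in extreme low light.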
arXiv Detail & Related papers (2021-08-04T16:36:29Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
arXiv Detail & Related papers (2020-11-30T02:49:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.