GuidNoise: Single-Pair Guided Diffusion for Generalized Noise Synthesis
- URL: http://arxiv.org/abs/2512.04456v1
- Date: Thu, 04 Dec 2025 05:00:00 GMT
- Title: GuidNoise: Single-Pair Guided Diffusion for Generalized Noise Synthesis
- Authors: Changjin Kim, HyeokJun Lee, YoungJoon Yoo
- Abstract summary: GuidNoise is a Single-Pair Guided Diffusion method for generalized noise synthesis. It uses a single noisy/clean pair as the guidance, often easily obtainable from within a training set. It introduces a guidance-aware affine feature modification (GAFM) and a noise-aware refine loss to leverage the inherent potential of diffusion models.
- Score: 9.253859022117306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent image denoising methods have leveraged generative modeling for real noise synthesis to address the costly acquisition of real-world noisy data. However, these generative models typically require camera metadata and extensive target-specific noisy-clean image pairs, and often generalize poorly between settings. In this paper, to mitigate these prerequisites, we propose a Single-Pair Guided Diffusion for generalized noise synthesis, GuidNoise, which uses a single noisy/clean pair as the guidance, often easily obtained from within the training set itself. To train GuidNoise, which generates synthetic noisy images from the guidance, we introduce a guidance-aware affine feature modification (GAFM) and a noise-aware refine loss to leverage the inherent potential of diffusion models. This loss function refines the diffusion model's backward process, making the model more adept at generating realistic noise distributions. GuidNoise synthesizes high-quality noisy images under diverse noise environments without additional metadata during either training or inference. Additionally, GuidNoise enables the efficient generation of noisy-clean image pairs at inference time, making synthetic noise readily applicable for augmenting training data. This self-augmentation significantly improves denoising performance, especially in practical scenarios with lightweight models and limited training data. The code is available at https://github.com/chjinny/GuidNoise.
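The abstract does not spell out GAFM's internals, but the name suggests FiLM-style conditioning: predicting a per-channel scale and shift from a guidance embedding and applying them to the diffusion features. A minimal NumPy sketch of that general idea follows; the function name, shapes, and weight matrices here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def gafm(features, guidance_embed, w_gamma, w_beta):
    """Hypothetical guidance-aware affine feature modification.

    Projects the guidance embedding to a per-channel scale (gamma)
    and shift (beta), then applies a channel-wise affine transform
    to the feature map, in the spirit of FiLM conditioning.
    """
    gamma = guidance_embed @ w_gamma  # (C,) per-channel scale
    beta = guidance_embed @ w_beta    # (C,) per-channel shift
    # features: (C, H, W); broadcast the affine transform over H, W
    return gamma[:, None, None] * features + beta[:, None, None]
```

In an actual diffusion backbone, such a modulation would typically be inserted after normalization layers so the guidance pair can steer the predicted noise statistics.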
Related papers
- 2-Shots in the Dark: Low-Light Denoising with Minimal Data Acquisition [24.81422645983973]
Learning-based denoisers have the potential to reconstruct high-quality images. For training, these denoisers require large paired datasets of clean and noisy images. Noise synthesis is an alternative to large-scale data acquisition.
arXiv Detail & Related papers (2025-12-02T21:32:31Z)
- Lightweight Data-Free Denoising for Detail-Preserving Biomedical Image Restoration [5.07046926436163]
Current self-supervised denoising techniques achieve impressive results, yet their real-world application is frequently constrained by substantial computational and memory demands. We present an ultra-lightweight model that achieves both fast denoising and high-quality image restoration.
arXiv Detail & Related papers (2025-10-17T12:59:21Z)
- Mitigating the Noise Shift for Denoising Generative Models via Noise Awareness Guidance [54.88271057438763]
Noise Awareness Guidance (NAG) is a correction method that explicitly steers sampling trajectories to remain consistent with the pre-defined noise schedule. NAG consistently mitigates noise shift and substantially improves the generation quality of mainstream diffusion models.
arXiv Detail & Related papers (2025-10-14T13:31:34Z)
- Dark Noise Diffusion: Noise Synthesis for Low-Light Image Denoising [22.897202020483576]
Low-light photography produces images with low signal-to-noise ratios due to limited photons. Deep-learning methods perform well, but they require large datasets of paired images that are impractical to acquire. In this paper, we investigate the ability of diffusion models to capture the complex distribution of low-light noise.
arXiv Detail & Related papers (2025-03-14T10:16:54Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
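The blue-noise paper defines its own correlated-noise construction; purely as background, a crude way to approximate blue (high-frequency-dominated) noise is to high-pass filter white Gaussian noise in the Fourier domain. The following NumPy sketch is an illustrative approximation, not the paper's method:

```python
import numpy as np

def blueish_noise(h, w, seed=0):
    """Rough blue-noise proxy: weight white Gaussian noise by radial
    frequency in the Fourier domain, emphasizing high frequencies and
    zeroing the DC component, then normalize to unit variance."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((h, w))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy**2 + fx**2)         # radial frequency magnitude
    spectrum = np.fft.fft2(white) * radius  # high-pass weighting
    noise = np.fft.ifft2(spectrum).real
    return noise / (noise.std() + 1e-8)     # unit variance
```

Because the DC term is zeroed, the result has (numerically) zero mean, and its energy is concentrated at fine spatial scales rather than being flat across frequencies as in white noise.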
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- Noise2NoiseFlow: Realistic Camera Noise Modeling without Clean Images [35.29066692454865]
This paper proposes a framework for training a noise model and a denoiser simultaneously.
It relies on pairs of noisy images rather than noisy/clean paired image data.
The trained denoiser is shown to significantly improve upon both supervised and weakly supervised baseline denoising approaches.
arXiv Detail & Related papers (2022-06-02T15:31:40Z)
- Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training [50.018580462619425]
We propose a novel framework, namely Pixel-level Noise-aware Generative Adversarial Network (PNGAN).
PNGAN employs a pre-trained real denoiser to map the fake and real noisy images into a nearly noise-free solution space.
For better noise fitting, we present an efficient architecture, Simple Multi-scale Network (SMNet), as the generator.
arXiv Detail & Related papers (2022-04-06T14:09:02Z)
- Estimating Fine-Grained Noise Model via Contrastive Learning [11.626812663592416]
We propose an innovative noise model estimation and noise synthesis pipeline for realistic noisy image generation.
Our model learns a noise estimation model with fine-grained statistical noise model in a contrastive manner.
By calibrating noise models of several sensors, our model can be extended to predict other cameras.
arXiv Detail & Related papers (2022-04-03T02:35:01Z)
- C2N: Practical Generative Noise Modeling for Real-World Denoising [53.96391787869974]
We introduce a Clean-to-Noisy image generation framework, namely C2N, to imitate complex real-world noise without using paired examples.
We construct the noise generator in C2N accordingly with each component of real-world noise characteristics to express a wide range of noise accurately.
arXiv Detail & Related papers (2022-02-19T05:53:46Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
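As a toy illustration of the guide-then-supervise pipeline described above (estimate noise characteristics from a guidance pair, synthesize noisy data, then pair it with ground truth for supervised training), here is a deliberately simplified Gaussian stand-in. Real methods such as ADANI or GuidNoise learn signal-dependent, spatially structured noise models rather than a single scalar level, so every name and assumption below is illustrative only:

```python
import numpy as np

def synthesize_noisy(clean, guide_noisy, guide_clean, seed=0):
    """Toy stand-in for a learned noise synthesizer: estimate a noise
    level from a single guidance noisy/clean pair, then add Gaussian
    noise of that strength to a new clean image."""
    rng = np.random.default_rng(seed)
    sigma = (guide_noisy - guide_clean).std()  # noise scale from the pair
    return clean + sigma * rng.standard_normal(clean.shape)
```

The resulting (synthetic noisy, clean) pairs can then feed an ordinary supervised denoising loss, which is the self-augmentation idea both ADANI and GuidNoise exploit.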
arXiv Detail & Related papers (2020-11-30T02:49:36Z)
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.