Listening to the Noise: Blind Denoising with Gibbs Diffusion
- URL: http://arxiv.org/abs/2402.19455v2
- Date: Tue, 25 Jun 2024 22:43:54 GMT
- Title: Listening to the Noise: Blind Denoising with Gibbs Diffusion
- Authors: David Heurtel-Depeiges, Charles C. Margossian, Ruben Ohana, Bruno Régaldo-Saint Blancard
- Abstract summary: We develop a Gibbs algorithm that alternates sampling steps from a conditional diffusion model, trained to map the signal prior to the family of noise distributions, with a Monte Carlo sampler that infers the noise parameters.
Our theoretical analysis highlights potential pitfalls, guides diagnostic usage, and quantifies errors in the Gibbs stationary distribution.
We showcase our method for 1) blind denoising of natural images involving colored noises with unknown amplitude and spectral index, and 2) a cosmology problem, where Bayesian inference of "noise" parameters means constraining models of the evolution of the Universe.
- Score: 4.310554658046964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, denoising problems have become intertwined with the development of deep generative models. In particular, diffusion models are trained like denoisers, and the distribution they model coincides with denoising priors in the Bayesian picture. However, denoising through diffusion-based posterior sampling requires the noise level and covariance to be known, preventing blind denoising. We overcome this limitation by introducing Gibbs Diffusion (GDiff), a general methodology addressing posterior sampling of both the signal and the noise parameters. Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates sampling steps from a conditional diffusion model trained to map the signal prior to the family of noise distributions, and a Monte Carlo sampler to infer the noise parameters. Our theoretical analysis highlights potential pitfalls, guides diagnostic usage, and quantifies errors in the Gibbs stationary distribution caused by the diffusion model. We showcase our method for 1) blind denoising of natural images involving colored noises with unknown amplitude and spectral index, and 2) a cosmology problem, namely the analysis of cosmic microwave background data, where Bayesian inference of "noise" parameters means constraining models of the evolution of the Universe.
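The abstract's alternation (a signal step from a conditional model, a noise-parameter step from a Monte Carlo sampler) can be illustrated on a toy Gaussian instance. Below is a minimal sketch, assuming `y = x + n` with signal prior `x ~ N(0, I)` and noise `n ~ N(0, phi * I)` for an unknown variance `phi`. The paper's signal step uses a conditional diffusion model; in this toy case the same conditional `p(x | y, phi)` has a closed form, which stands in for it here. This is an illustration of the Gibbs alternation pattern, not the authors' sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_signal(y, phi):
    # Closed-form stand-in for the conditional diffusion sampler:
    # x | y, phi ~ N(y / (1 + phi), phi / (1 + phi) * I)
    var = phi / (1.0 + phi)
    return y / (1.0 + phi) + np.sqrt(var) * rng.standard_normal(y.shape)

def log_lik(eps, phi):
    # log N(eps; 0, phi * I), up to an additive constant
    return -0.5 * np.sum(eps ** 2) / phi - 0.5 * eps.size * np.log(phi)

def sample_noise_param(eps, phi, step=0.1):
    # One random-walk Metropolis step on log(phi), flat prior on log(phi)
    prop = phi * np.exp(step * rng.standard_normal())
    if np.log(rng.random()) < log_lik(eps, prop) - log_lik(eps, phi):
        return prop
    return phi

# Simulate data with true noise variance phi = 0.5
x_true = rng.standard_normal(2000)
y = x_true + np.sqrt(0.5) * rng.standard_normal(2000)

# Gibbs loop: alternate the two conditional sampling steps
phi, chain = 1.0, []
for _ in range(600):
    x = sample_signal(y, phi)             # signal step (diffusion model in GDiff)
    phi = sample_noise_param(y - x, phi)  # noise-parameter step (Monte Carlo)
    chain.append(phi)

phi_hat = float(np.mean(chain[300:]))     # posterior mean estimate of phi
```

After burn-in, the chain over `phi` targets the marginal posterior of the noise variance, so `phi_hat` should land near the true value 0.5; this is the blind-denoising payoff of sampling signal and noise parameters jointly.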
Related papers
- There and Back Again: On the relation between noises, images, and their inversions in diffusion models [3.5707423185282665]
Denoising Diffusion Probabilistic Models (DDPMs) achieve state-of-the-art performance in synthesizing new images from random noise.
Recent DDPM-based editing techniques try to mitigate this issue by inverting images back to their approximated starting noise.
We study the relation between the initial Gaussian noise, the samples generated from it, and their corresponding latent encodings obtained through the inversion procedure.
arXiv Detail & Related papers (2024-10-31T00:30:35Z)
- Diffusion Gaussian Mixture Audio Denoise [23.760755498636943]
We propose a DiffGMM model, a denoising model based on the diffusion and Gaussian mixture models.
Given a noisy audio signal, we first apply a 1D-U-Net to extract features and train linear layers to estimate parameters for the Gaussian mixture model.
The noisy signal is continuously subtracted from the estimated noise to output clean audio signals.
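The DiffGMM pipeline described above (extract features, estimate Gaussian-mixture parameters, subtract the estimated noise) can be sketched on a toy 1-D signal. In the sketch below, a moving average stands in for the 1D-U-Net's coarse signal estimate and a few EM steps stand in for the learned linear layers; it illustrates only the estimate-and-subtract scheme, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_gmm_1d(x, iters=100):
    # Plain EM for a 2-component 1-D Gaussian mixture (stand-in for the
    # paper's learned parameter estimator)
    mu = np.percentile(x, [25, 75])      # deterministic initialization
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)   # responsibilities
        nk = r.sum(axis=0)
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Clean low-frequency "audio" plus noise drawn from a 2-component mixture
t = np.linspace(0.0, 8.0 * np.pi, 4000)
clean = np.sin(t)
comp = rng.random(4000) < 0.3
noise = np.where(comp, rng.normal(0.5, 0.1, 4000), rng.normal(-0.2, 0.05, 4000))
noisy = clean + noise

# Stand-in for the 1D-U-Net: a moving average recovers the coarse signal,
# so the residual approximates the noise; fit the mixture to that residual
coarse = np.convolve(noisy, np.ones(51) / 51, mode="same")
resid = noisy - coarse
pi, mu, sigma = fit_gmm_1d(resid)

# Subtract the per-sample expected noise (responsibility-weighted means)
dens = pi * np.exp(-0.5 * ((resid[:, None] - mu) / sigma) ** 2) / sigma
r = dens / dens.sum(axis=1, keepdims=True)
denoised = noisy - r @ mu

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

Because the two noise components are well separated, the responsibility-weighted subtraction removes most of the between-component noise energy, so `mse_denoised` comes out well below `mse_noisy`.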
arXiv Detail & Related papers (2024-06-13T14:18:10Z)
- NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation [86.7260950382448]
We propose NoiseDiffusion, a novel approach that corrects noise to preserve image validity.
NoiseDiffusion performs within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
arXiv Detail & Related papers (2024-03-13T12:32:25Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- SVNR: Spatially-variant Noise Removal with Denoising Diffusion [43.2405873681083]
We present a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model.
In experiments we demonstrate the advantages of our approach over a strong diffusion model baseline, as well as over a state-of-the-art single image denoising method.
arXiv Detail & Related papers (2023-06-28T09:32:00Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Diffusion Model for Generative Image Denoising [17.897180118637856]
In supervised learning for image denoising, paired clean and noisy images are typically collected or synthesized to train a denoising model.
In this paper, we regard the denoising task as a problem of estimating the posterior distribution of clean images conditioned on noisy images.
arXiv Detail & Related papers (2023-02-05T14:53:07Z)
- CFNet: Conditional Filter Learning with Dynamic Noise Estimation for Real Image Denoising [37.29552796977652]
This paper considers real noise approximated by heteroscedastic Gaussian/Poisson Gaussian distributions with in-camera signal processing pipelines.
We propose a novel conditional filter in which the optimal kernels for different feature positions can be adaptively inferred by local features from the image and the noise map.
We also incorporate into the CNN structure the idea of alternating between noise estimation and non-blind denoising, continuously updating the noise prior to guide the iterative feature denoising.
arXiv Detail & Related papers (2022-11-26T14:28:54Z)
- Denoising Diffusion Gamma Models [91.22679787578438]
We introduce the Denoising Diffusion Gamma Model (DDGM) and show that noise from Gamma distribution provides improved results for image and speech generation.
Our approach preserves the ability to efficiently sample states during the diffusion training process while using Gamma noise.
arXiv Detail & Related papers (2021-10-10T10:46:31Z)
- Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)
- Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both DAE and DSM provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
arXiv Detail & Related papers (2020-01-31T23:50:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.