On the Posterior Distribution in Denoising: Application to Uncertainty
Quantification
- URL: http://arxiv.org/abs/2309.13598v2
- Date: Mon, 19 Feb 2024 10:48:06 GMT
- Title: On the Posterior Distribution in Denoising: Application to Uncertainty
Quantification
- Authors: Hila Manor and Tomer Michaeli
- Abstract summary: Tweedie's formula links the posterior mean in Gaussian denoising with the score of the data distribution.
We show how to efficiently compute the principal components of the posterior distribution for any desired region of an image.
Our method is fast and memory-efficient, as it does not explicitly compute or store the high-order moment tensors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoisers play a central role in many applications, from noise suppression in
low-grade imaging sensors, to empowering score-based generative models. The
latter category of methods makes use of Tweedie's formula, which links the
posterior mean in Gaussian denoising (i.e., the minimum MSE denoiser) with the
score of the data distribution. Here, we derive a fundamental relation between
the higher-order central moments of the posterior distribution, and the
higher-order derivatives of the posterior mean. We harness this result for
uncertainty quantification of pre-trained denoisers. Particularly, we show how
to efficiently compute the principal components of the posterior distribution
for any desired region of an image, as well as to approximate the full marginal
distribution along those (or any other) one-dimensional directions. Our method
is fast and memory-efficient, as it does not explicitly compute or store the
high-order moment tensors, and it requires no training or fine-tuning of the
denoiser. Code and examples are available on the project webpage at
https://hilamanor.github.io/GaussianDenoisingPosterior/.
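The core idea of the abstract can be illustrated on a toy Gaussian prior, where the MMSE denoiser is linear and everything can be checked in closed form. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' code: it uses the fact that, for Gaussian noise, the posterior covariance equals sigma^2 times the Jacobian of the posterior-mean denoiser, and recovers the top posterior principal component by power iteration using only finite-difference Jacobian-vector products (i.e., forward calls to the denoiser), so no moment tensor is ever formed. The function names are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 8, 0.5

# Toy Gaussian prior N(0, C): here the MMSE denoiser is linear and known in
# closed form, so the power-iteration result can be verified exactly.
A = rng.standard_normal((d, d))
C = A @ A.T / d                                 # prior covariance
W = C @ np.linalg.inv(C + sigma2 * np.eye(d))   # denoiser matrix: D(y) = W y

def denoiser(y):
    return W @ y

def top_posterior_pc(denoise, y, n_iter=200, eps=1e-3):
    # For Gaussian noise the posterior covariance is sigma^2 times the Jacobian
    # of the posterior-mean denoiser, so its leading eigenvector is reachable
    # with Jacobian-vector products alone (no explicit covariance matrix).
    v = rng.standard_normal(y.shape)
    v /= np.linalg.norm(v)
    d0 = denoise(y)
    for _ in range(n_iter):
        jv = (denoise(y + eps * v) - d0) / eps  # finite-difference JVP
        v = jv / np.linalg.norm(jv)
    return v

y = rng.standard_normal(d)
v = top_posterior_pc(denoiser, y)

# Cross-check against the exact top eigenvector of sigma^2 * W (symmetrized).
_, evecs = np.linalg.eigh(sigma2 * (W + W.T) / 2)
v_exact = evecs[:, -1]
print(abs(v @ v_exact))  # close to 1.0
```

With a pre-trained neural denoiser, the same loop applies unchanged: each iteration costs one extra forward pass, which is what makes the approach fast and memory-efficient.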
Related papers
- Integrating Reweighted Least Squares with Plug-and-Play Diffusion Priors for Noisy Image Restoration [6.402777145722335]
We propose a plug-and-play image restoration framework based on generative diffusion priors for robust removal of general noise types, including impulse noise. Experimental results on benchmark datasets demonstrate that the proposed method effectively removes non-Gaussian impulse noise and achieves superior restoration performance.
arXiv Detail & Related papers (2025-11-10T08:11:20Z) - Noise Conditional Variational Score Distillation [60.38982038894823]
Noise Conditional Variational Score Distillation (NCVSD) is a novel method for distilling pretrained diffusion models into generative denoisers. By integrating this insight into the Variational Score Distillation framework, we enable scalable learning of generative denoisers.
arXiv Detail & Related papers (2025-06-11T06:01:39Z) - FreSca: Unveiling the Scaling Space in Diffusion Models [52.20473039489599]
Diffusion models offer impressive controllability for image tasks, primarily through noise predictions that encode task-specific information and guidance enabling adjustable scaling.
We investigate this space, starting with inversion-based editing where the difference between conditional/unconditional noise predictions carries key semantic information.
Our core contribution stems from a Fourier analysis of noise predictions, revealing that its low- and high-frequency components evolve differently throughout diffusion.
Based on this insight, we introduce FreSca, a straightforward method that applies guidance scaling independently to different frequency bands in the Fourier domain.
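The band-wise scaling idea described in the FreSca summary above can be sketched in a few lines. This is an illustrative reconstruction from the summary, not the paper's implementation; the function name, the radial mask, and the cutoff value are assumptions made for this example.

```python
import numpy as np

def frequency_scaled(noise_pred, scale_low, scale_high, cutoff=0.25):
    """Apply separate guidance scales to the low- and high-frequency bands
    of a 2-D noise prediction (hypothetical helper)."""
    f = np.fft.fftshift(np.fft.fft2(noise_pred))
    h, w = noise_pred.shape
    yy, xx = np.indices((h, w))
    # Normalized radial frequency: 0 at the center (DC), ~1 at the band edge.
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    f = np.where(radius <= cutoff, scale_low * f, scale_high * f)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

x = np.random.default_rng(0).standard_normal((16, 16))
print(np.allclose(frequency_scaled(x, 1.0, 1.0), x))  # True: unit scales are a no-op
```

Setting `scale_low != scale_high` then steers low- and high-frequency content of the noise prediction independently, which is the controllability the summary describes.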
arXiv Detail & Related papers (2025-04-02T22:03:11Z) - Score-Based Turbo Message Passing for Plug-and-Play Compressive Image Recovery [24.60447255507278]
Off-the-shelf image denoisers mostly rely on some generic or hand-crafted priors for denoising.
We devise a message passing framework that integrates a score-based minimum mean squared error (MMSE) denoiser for compressive image recovery.
arXiv Detail & Related papers (2025-03-28T04:30:58Z) - Robust Representation Consistency Model via Contrastive Denoising [83.47584074390842]
Randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations.
Diffusion models have been successfully employed for randomized smoothing to purify noise-perturbed samples.
We reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space.
arXiv Detail & Related papers (2025-01-22T18:52:06Z) - There and Back Again: On the relation between Noise and Image Inversions in Diffusion Models [3.5707423185282665]
Inversion-based methods map each image back to its approximated starting noise. We show that latents exhibit structural patterns in the form of less diverse noise predicted for smooth image regions. We propose to replace the first DDIM Inversion steps with a forward diffusion process, which successfully decorrelates latent encodings.
arXiv Detail & Related papers (2024-10-31T00:30:35Z) - One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion
Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and bridges the gap between training and inference, while preserving the original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z) - Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z) - Batch-less stochastic gradient descent for compressive learning of deep
regularization for image denoising [0.0]
We consider the problem of denoising with the help of prior information taken from a database of clean signals or images.
With deep neural networks (DNN), complex distributions can be recovered from a large training database.
We propose two variants of stochastic gradient descent (SGD) for the recovery of deep regularization parameters.
arXiv Detail & Related papers (2023-10-02T11:46:11Z) - Score Priors Guided Deep Variational Inference for Unsupervised
Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and a variational noise posterior to model the real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Heavy-tailed denoising score matching [5.371337604556311]
We develop an iterative noise scaling algorithm to consistently initialise the multiple levels of noise in Langevin dynamics.
On the practical side, our use of heavy-tailed DSM leads to improved score estimation, controllable sampling convergence, and more balanced unconditional generative performance for imbalanced datasets.
arXiv Detail & Related papers (2021-12-17T22:04:55Z) - Noise Distribution Adaptive Self-Supervised Image Denoising using
Tweedie Distribution and Score Matching [29.97769511276935]
We show that Tweedie distributions play key roles in the modern deep learning era, leading to a distribution-independent self-supervised image denoising formula without clean reference images.
Specifically, by combining with the recent Noise2Score self-supervised image denoising approach and the saddle point approximation of Tweedie distribution, we can provide a general closed-form denoising formula.
We show that the proposed method can accurately estimate noise models and parameters, and provide the state-of-the-art self-supervised image denoising performance in the benchmark dataset and real-world dataset.
arXiv Detail & Related papers (2021-12-05T04:36:08Z) - Estimating High Order Gradients of the Data Distribution by Denoising [81.24581325617552]
The first-order derivative of a data density can be estimated efficiently by denoising score matching.
We propose a method to directly estimate high order derivatives (scores) of a data density from samples.
arXiv Detail & Related papers (2021-11-08T18:59:23Z) - Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
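The sub-sampling step described in the Neighbor2Neighbor summary above can be sketched as follows. This is an illustrative reconstruction, not the authors' code (the actual method additionally handles multi-channel images and adds a regularized loss), and the helper name is invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_subsample(noisy, rng):
    """Split one noisy image into an (input, target) training pair.

    Each 2x2 cell contributes one randomly chosen pixel to the input and a
    different randomly chosen pixel to the target, so the two half-resolution
    images share the underlying clean signal but carry independent noise.
    """
    h, w = noisy.shape
    h2, w2 = h // 2, w // 2
    # Gather the 4 pixels of every 2x2 cell into the last axis.
    cells = (noisy[: 2 * h2, : 2 * w2]
             .reshape(h2, 2, w2, 2)
             .transpose(0, 2, 1, 3)
             .reshape(h2, w2, 4))
    idx_in = rng.integers(0, 4, size=(h2, w2))
    # A nonzero offset guarantees the target pixel differs from the input pixel.
    idx_tg = (idx_in + rng.integers(1, 4, size=(h2, w2))) % 4
    rows, cols = np.indices((h2, w2))
    return cells[rows, cols, idx_in], cells[rows, cols, idx_tg]

noisy = rng.standard_normal((8, 8))
x_in, x_tg = neighbor_subsample(noisy, rng)
print(x_in.shape, x_tg.shape)  # (4, 4) (4, 4)
```

A denoising network trained to map `x_in` to `x_tg` never sees a clean reference, which is the self-supervised training signal the summary describes.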
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.