Denoising Deep Generative Models
- URL: http://arxiv.org/abs/2212.01265v1
- Date: Wed, 30 Nov 2022 19:00:00 GMT
- Title: Denoising Deep Generative Models
- Authors: Gabriel Loaiza-Ganem, Brendan Leigh Ross, Luhuan Wu, John P.
Cunningham, Jesse C. Cresswell, Anthony L. Caterini
- Abstract summary: Likelihood-based deep generative models have been shown to exhibit pathological behaviour under the manifold hypothesis.
We propose two methodologies aimed at addressing this problem.
- Score: 23.19427801594478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Likelihood-based deep generative models have recently been shown to exhibit
pathological behaviour under the manifold hypothesis as a consequence of using
high-dimensional densities to model data with low-dimensional structure. In
this paper we propose two methodologies aimed at addressing this problem. Both
are based on adding Gaussian noise to the data to remove the dimensionality
mismatch during training, and both provide a denoising mechanism whose goal is
to sample from the model as though no noise had been added to the data. Our
first approach is based on Tweedie's formula, and the second on models which
take the variance of added noise as a conditional input. We show that,
surprisingly, while well motivated, these approaches only sporadically improve
performance over not adding noise, and that other methods of addressing the
dimensionality mismatch are more empirically adequate.
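
To make the first approach concrete: for Gaussian noise y = x + σε, Tweedie's formula gives the posterior mean E[x | y] = y + σ² ∇_y log p_σ(y), so a model of the noisy density yields a denoiser essentially for free. A minimal sketch, assuming a trained score function `score_fn` (a stand-in, not the paper's code):

```python
import torch

def tweedie_denoise(y: torch.Tensor, score_fn, sigma: float) -> torch.Tensor:
    """Posterior-mean denoising via Tweedie's formula.

    For y = x + sigma * eps with eps ~ N(0, I), Tweedie's formula gives
        E[x | y] = y + sigma**2 * grad_y log p_sigma(y).
    `score_fn(y)` is assumed to return the score of the *noisy* density,
    e.g. obtained by differentiating a trained density model.
    """
    return y + sigma**2 * score_fn(y)
```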
Related papers
- Iso-Diffusion: Improving Diffusion Probabilistic Models Using the Isotropy of the Additive Gaussian Noise [0.0]
Minimizing the mean squared error between the additive and predicted noise alone does not constrain the predicted noise to be isotropic.
We utilize the isotropy of the additive noise as a constraint on the objective function to enhance the fidelity of DDPMs.
arXiv Detail & Related papers (2024-03-25T14:05:52Z)
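
One plausible reading of the Iso-Diffusion entry above: add an isotropy penalty on the predicted noise to the usual DDPM objective. The penalty form and the weight `lam` below are illustrative assumptions, not the paper's exact constraint:

```python
import torch
import torch.nn.functional as F

def iso_regularized_loss(eps_pred: torch.Tensor, eps: torch.Tensor,
                         lam: float = 0.1) -> torch.Tensor:
    """DDPM noise-prediction loss with an isotropy penalty (hedged sketch).

    Plain MSE does not force eps_pred to look like isotropic Gaussian
    noise; here we penalize deviation of its per-dimension second moment
    from 1 as one simple proxy for isotropy.
    """
    mse = F.mse_loss(eps_pred, eps)
    second_moment = eps_pred.pow(2).mean()  # equals 1 for eps ~ N(0, I)
    return mse + lam * (second_moment - 1.0) ** 2
```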
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that affinely transforms the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
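
A hedged sketch of the NMTune idea above: an affine transformation of frozen pre-trained features, trained downstream to counteract noise absorbed during pre-training. The class name and the diagonal-affine parameterization are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class AffineTune(nn.Module):
    """Hypothetical affine head over frozen pre-trained features.

    Captures only the high-level idea of affinely transforming the
    feature space to suppress noise learned during pre-training; the
    paper's actual objectives and parameterization may differ.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats * self.scale + self.shift
```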
- One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z)
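
The OMS entry above can be pictured as a single learned correction applied to the initial latent before an otherwise unchanged sampler runs; `oms_module` and `base_sampler` below are hypothetical stand-ins rather than the paper's actual interface:

```python
import torch

def sample_with_oms(base_sampler, oms_module, latent_shape):
    """One extra corrective step before ordinary sampling (hedged sketch).

    `oms_module` adjusts the initial Gaussian latent once; the
    pre-trained `base_sampler` then runs unchanged, which is why models
    sharing a latent domain could share the same module.
    """
    z = torch.randn(latent_shape)  # standard Gaussian initial latent
    z = oms_module(z)              # the "one more step"
    return base_sampler(z)
```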
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
- From Denoising Diffusions to Denoising Markov Models [38.33676858989955]
Denoising diffusions are state-of-the-art generative models exhibiting remarkable empirical performance.
We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.
arXiv Detail & Related papers (2022-11-07T14:34:27Z)
- Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models [3.136861161060886]
We present the key mathematical derivations for creating denoising diffusion models using an underlying non-isotropic Gaussian noise model.
We also provide initial experiments to help verify that this more general modelling approach can also yield high-quality samples.
arXiv Detail & Related papers (2022-10-21T21:16:46Z)
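
For the non-isotropic entry above, the key change is drawing correlated rather than i.i.d. Gaussian noise in the forward perturbation. A sketch assuming a fixed covariance Σ = L Lᵀ (the paper's actual parameterization and schedule may differ):

```python
import torch

def perturb_non_isotropic(x0: torch.Tensor, L: torch.Tensor, alpha_bar: float):
    """Forward diffusion step with non-isotropic Gaussian noise (sketch).

    x0:        clean data, shape (batch, d)
    L:         factor of shape (d, d); the noise covariance is L @ L.T
    alpha_bar: cumulative signal-retention coefficient in (0, 1)
    """
    z = torch.randn_like(x0)
    eps = z @ L.T  # correlated noise with covariance L @ L.T
    xt = alpha_bar**0.5 * x0 + (1 - alpha_bar)**0.5 * eps
    return xt, z   # z is one possible prediction target, as in isotropic DSM
```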
- Noise Distribution Adaptive Self-Supervised Image Denoising using Tweedie Distribution and Score Matching [29.97769511276935]
We show that Tweedie distributions play key roles in the modern deep learning era, leading to a distribution-independent self-supervised image denoising formula that requires no clean reference images.
Specifically, by combining with the recent Noise2Score self-supervised image denoising approach and the saddle point approximation of Tweedie distribution, we can provide a general closed-form denoising formula.
We show that the proposed method can accurately estimate noise models and parameters, and that it achieves state-of-the-art self-supervised image denoising performance on benchmark and real-world datasets.
arXiv Detail & Related papers (2021-12-05T04:36:08Z)
- Estimating High Order Gradients of the Data Distribution by Denoising [81.24581325617552]
The first-order derivative of a data density can be estimated efficiently by denoising score matching.
We propose a method to directly estimate high order derivatives (scores) of a data density from samples.
arXiv Detail & Related papers (2021-11-08T18:59:23Z)
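
The entry above generalizes the denoising trick beyond first order; for Gaussian corruption the second-order analogue of Tweedie's formula reads Cov[x | y] = σ²I + σ⁴ ∇²_y log p(y). A sketch using automatic differentiation for intuition (the paper trains direct estimators instead):

```python
import torch
from torch.autograd.functional import jacobian

def posterior_moments(y: torch.Tensor, score_fn, sigma: float):
    """First- and second-order Tweedie estimates for y = x + sigma * eps.

    E[x | y]   = y + sigma**2 * s(y)
    Cov[x | y] = sigma**2 * I + sigma**4 * ds/dy   (ds/dy = Hessian of log p)
    `y` is a flat vector of dimension d; `score_fn` maps R^d -> R^d.
    """
    mean = y + sigma**2 * score_fn(y)
    J = jacobian(score_fn, y)  # (d, d) Jacobian of the score
    cov = sigma**2 * torch.eye(y.numel()) + sigma**4 * J
    return mean, cov
```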
- A Bayesian Approach with Type-2 Student-t Membership Function for T-S Model Identification [47.25472624305589]
Fuzzy c-regression clustering based on type-2 fuzzy sets has shown remarkable results on non-sparse data.
An innovative architecture for the fuzzy c-regression model is presented, and a novel Student-t distribution based membership function is designed for sparse data modelling.
arXiv Detail & Related papers (2020-09-02T05:10:13Z)
- Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
- Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both denoising auto-encoders (DAE) and denoising score matching (DSM) provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
arXiv Detail & Related papers (2020-01-31T23:50:03Z)
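
For the last entry, the practical upshot is that a denoising auto-encoder trained at noise level σ gives the smoothed score as s(x) ≈ (DAE(x) − x) / σ², which can drive Langevin dynamics. A minimal sketch with illustrative step sizes:

```python
import torch

@torch.no_grad()
def langevin_sample(dae, x: torch.Tensor, sigma: float,
                    step: float = 1e-4, n_steps: int = 100) -> torch.Tensor:
    """Unadjusted Langevin sampling driven by a DAE-derived score (sketch).

    Uses the identity s(x) ~ (dae(x) - x) / sigma**2 for the score of the
    sigma-smoothed density; step size and iteration count are assumptions.
    """
    for _ in range(n_steps):
        score = (dae(x) - x) / sigma**2
        x = x + 0.5 * step * score + step**0.5 * torch.randn_like(x)
    return x
```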