On Noise Injection in Generative Adversarial Networks
- URL: http://arxiv.org/abs/2006.05891v3
- Date: Sat, 22 May 2021 09:52:40 GMT
- Title: On Noise Injection in Generative Adversarial Networks
- Authors: Ruili Feng, Deli Zhao, Zhengjun Zha
- Abstract summary: Noise injection has proved to be one of the key technical advances in generating high-fidelity images.
We propose a geometric framework to theoretically analyze the role of noise injection in GANs.
- Score: 85.51169466453646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noise injection has proved to be one of the key technical advances
in generating high-fidelity images. Despite its successful use in GANs, the
mechanism of its validity is still unclear. In this paper, we propose a
geometric framework to theoretically analyze the role of noise injection in
GANs. Based on Riemannian geometry, we successfully model the noise injection
framework as a fuzzy equivalence on geodesic normal coordinates. Guided by our
theory, we find that the existing method is incomplete, and we devise a new
strategy for noise injection. Experiments on image generation and GAN
inversion demonstrate the superiority of our method.
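For concreteness, the kind of noise injection analyzed here is the per-layer additive noise used in generators such as StyleGAN: each feature map receives a fresh Gaussian noise map scaled by a learned per-channel weight. Below is a minimal PyTorch sketch of that mechanism; it is illustrative only and is not the new injection strategy devised in the paper.

    import torch
    import torch.nn as nn

    class NoiseInjection(nn.Module):
        """StyleGAN-style noise injection: add per-pixel Gaussian noise
        scaled by a learned per-channel weight (illustrative sketch)."""
        def __init__(self, channels):
            super().__init__()
            self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

        def forward(self, x, noise=None):
            if noise is None:
                # Fresh noise map for every forward pass, one map per sample,
                # shared across channels.
                noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device)
            return x + self.weight * noise

    # Typical use inside a generator block:
    #   inject = NoiseInjection(256); feat = inject(conv(feat))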
Related papers
- Correntropy-Based Improper Likelihood Model for Robust Electrophysiological Source Imaging [18.298620404141047]
Existing source imaging algorithms utilize the Gaussian assumption for the observation noise to build the likelihood function for Bayesian inference.
The electromagnetic measurements of brain activity are usually affected by miscellaneous artifacts, leading to a potentially non-Gaussian distribution for the observation noise.
We propose a new likelihood model that is robust to non-Gaussian noise.
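As a reference point for the correntropy idea, the loss induced by a Gaussian-kernel correntropy criterion is the bounded Welsch loss, which saturates for large residuals instead of growing quadratically. A small NumPy sketch follows; the kernel width sigma and the toy data are illustrative choices, not the paper's source-imaging model.

    import numpy as np

    def correntropy_loss(residual, sigma=1.0):
        """Negative Gaussian-kernel correntropy (Welsch loss): bounded in the
        residual, so non-Gaussian outliers contribute little, unlike the
        squared error implied by a Gaussian noise assumption."""
        return float(np.mean(1.0 - np.exp(-residual ** 2 / (2.0 * sigma ** 2))))

    rng = np.random.default_rng(0)
    residual = rng.normal(0.0, 0.1, 1000)
    residual[:50] += 20.0                      # heavy-tailed artifact samples
    print(correntropy_loss(residual))          # stays bounded (around 0.05)
    print(float(np.mean(residual ** 2)))       # squared error blown up by outliers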
arXiv Detail & Related papers (2024-08-27T07:54:15Z) - Causal Discovery with Score Matching on Additive Models with Arbitrary Noise [37.13308785728276]
Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.
In this paper we show the shortcomings of inference under the Gaussian-noise hypothesis, analyzing the risk of edge inversion when Gaussianity of the noise terms is violated.
We propose a novel method for inferring the topological ordering of the variables in the causal graph, from data generated according to an additive non-linear model with a generic noise distribution.
This leads to NoGAM, a causal discovery algorithm with a minimal set of assumptions and state-of-the-art performance, experimentally benchmarked on synthetic data.
arXiv Detail & Related papers (2023-04-06T17:50:46Z) - Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) have demonstrated remarkable empirical performance.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models in applications such as robotics, geoscience, or protein modeling, where data naturally live on Riemannian manifolds.
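For contrast with the Riemannian setting, the flat Euclidean building block of SGMs is Langevin dynamics driven by a score function. The NumPy sketch below uses an analytically known Gaussian score as a stand-in; the values are toy choices and the paper's manifold construction is not shown.

    import numpy as np

    def langevin_sample(score, x0, step=1e-2, n_steps=2000, rng=None):
        """Unadjusted Langevin dynamics x <- x + step*score(x) + sqrt(2*step)*z,
        the Euclidean sampler underlying score-based generative models."""
        rng = np.random.default_rng() if rng is None else rng
        x = float(x0)
        for _ in range(n_steps):
            x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        return x

    mu, sigma = 3.0, 0.5
    score = lambda x: -(x - mu) / sigma ** 2   # analytic score of N(mu, sigma^2)
    samples = np.array([langevin_sample(score, 0.0) for _ in range(500)])
    print(samples.mean(), samples.std())       # should approach (3.0, 0.5)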
arXiv Detail & Related papers (2022-02-06T11:57:39Z) - Unsupervised Single Image Super-resolution Under Complex Noise [60.566471567837574]
This paper proposes a model-based unsupervised SISR method to deal with the general SISR task with unknown degradations.
The proposed method clearly surpasses the current state-of-the-art (SotA) method by about 1dB in PSNR, with a smaller model (0.34M vs. 2.40M parameters), while also running faster.
arXiv Detail & Related papers (2021-07-02T11:55:40Z) - Guided Integrated Gradients: An Adaptive Path Method for Removing Noise [9.792727625917083]
Integrated Gradients (IG) is a commonly used feature attribution method for deep neural networks.
We show that one of the causes of the problem is the accumulation of noise along the IG path.
We propose adapting the attribution path itself -- conditioning the path not just on the image but also on the model being explained.
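For reference, standard Integrated Gradients averages gradients along the straight line from a baseline to the input; Guided IG replaces that fixed path with an adaptive one. The PyTorch sketch below shows only the standard method; the model, target class, and step count are placeholders.

    import torch

    def integrated_gradients(model, x, baseline, target, steps=64):
        """Vanilla IG: Riemann-sum average of gradients along the straight-line
        path from `baseline` to `x`, scaled by (x - baseline).  Guided IG
        adapts the path itself; that variant is not shown here."""
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
        path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
        logits = model(path)                 # assumes model accepts a batch of inputs
        logits[:, target].sum().backward()
        return (x - baseline) * path.grad.mean(dim=0)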
arXiv Detail & Related papers (2021-06-17T20:00:55Z) - Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called 'implicit effect' of GNIs, which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an 'implicit bias', which varies depending on the heaviness of the tails and the level of asymmetry.
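To make the object of study concrete: a Gaussian noise injection adds fresh zero-mean Gaussian noise to hidden activations during training, and its 'implicit effect' is the extra perturbation this adds to each gradient update. A small PyTorch sketch that exposes those perturbations follows; the toy network, data, and noise scale are arbitrary placeholders.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    x, y = torch.randn(64, 10), torch.randn(64, 1)

    def first_layer_grad(sigma):
        """Gradient of the first-layer weights when Gaussian noise of scale
        `sigma` is injected into the hidden activations."""
        net.zero_grad()
        h = torch.relu(net[0](x))
        h = h + sigma * torch.randn_like(h)          # Gaussian noise injection
        ((net[2](h) - y) ** 2).mean().backward()
        return net[0].weight.grad.clone()

    clean = first_layer_grad(0.0)
    # Gradient perturbations induced by GNI; the paper analyzes their
    # asymmetric, heavy-tailed distribution.
    perturbations = torch.stack([first_layer_grad(0.5) - clean for _ in range(200)])
    print(perturbations.mean().item(), perturbations.std().item())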
arXiv Detail & Related papers (2021-02-13T21:28:09Z) - Statistical Analysis of Signal-Dependent Noise: Application in Blind Localization of Image Splicing Forgery [20.533239616846874]
In this work, we apply signal-dependent noise (SDN) to splicing localization tasks.
By building a maximum a posteriori Markov random field (MAP-MRF) framework, we exploit the likelihood of the noise to reveal the alien region of spliced objects.
Experimental results demonstrate that our method is effective and provides competitive localization performance.
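As background, a signal-dependent noise model ties the local noise variance to the local intensity, for example var ≈ a·mean + b; patches whose statistics fall far from the fitted curve are candidate spliced regions. The NumPy sketch below fits such a model from raw per-patch statistics; it is deliberately simplified and ignores both texture separation and the MAP-MRF inference used in the paper.

    import numpy as np

    def fit_sdn(image, patch=16):
        """Fit var ~= a * mean + b from per-patch (mean, variance) pairs.
        A crude SDN estimate: real pipelines separate texture from noise and
        embed the resulting likelihood in a MAP-MRF segmentation."""
        h, w = image.shape
        means, variances = [], []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                block = image[i:i + patch, j:j + patch]
                means.append(block.mean())
                variances.append(block.var())
        a, b = np.polyfit(means, variances, deg=1)
        return a, b

    # Patches whose variance deviates strongly from a*mean + b are flagged as
    # possible alien (spliced) regions carrying a different noise model.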
arXiv Detail & Related papers (2020-10-30T11:53:53Z) - Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods in terms of both subjective and objective criteria.
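The low-rank step behind such group-based methods can be written in a few lines: stack the vectorized similar patches as columns and soft-threshold the singular values (singular value thresholding), which minimizes a nuclear-norm penalty. The NumPy sketch below shows only that step; the threshold tau and toy data are illustrative, and the paper's learned graph model is not included.

    import numpy as np

    def denoise_patch_group(group, tau):
        """Singular value thresholding: the proximal operator of the nuclear
        norm, applied to a matrix whose columns are similar patches."""
        u, s, vt = np.linalg.svd(group, full_matrices=False)
        return (u * np.maximum(s - tau, 0.0)) @ vt

    rng = np.random.default_rng(0)
    clean = np.outer(rng.standard_normal(64), np.ones(20))     # rank-1 patch group
    noisy = clean + 0.3 * rng.standard_normal((64, 20))
    print(np.linalg.norm(noisy - clean))                            # error before
    print(np.linalg.norm(denoise_patch_group(noisy, 4.0) - clean))  # error after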
arXiv Detail & Related papers (2020-08-09T15:12:16Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
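A concrete instance of parameter-dependent noise is label perturbation: each mini-batch step regresses to labels corrupted with fresh Gaussian noise, so the resulting gradient noise depends on the current parameters rather than being a fixed spherical Gaussian. A short PyTorch sketch follows; the model, data, and noise scale are placeholder choices.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, y = torch.randn(256, 20), torch.randn(256, 1)

    for step in range(1000):
        idx = torch.randint(0, 256, (32,))            # mini-batch sampling noise
        y_noisy = y[idx] + 0.1 * torch.randn(32, 1)   # label perturbation noise
        loss = ((model(x[idx]) - y_noisy) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Mini-batch and label-perturbation noise are the parameter-dependent
    # sources the paper contrasts with spherical Gaussian noise added
    # directly to the parameter update.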
arXiv Detail & Related papers (2020-06-15T18:31:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.