Optimizing Information-theoretical Generalization Bounds via Anisotropic
Noise in SGLD
- URL: http://arxiv.org/abs/2110.13750v1
- Date: Tue, 26 Oct 2021 15:02:27 GMT
- Title: Optimizing Information-theoretical Generalization Bounds via Anisotropic
Noise in SGLD
- Authors: Bohan Wang, Huishuai Zhang, Jieyu Zhang, Qi Meng, Wei Chen, Tie-Yan
Liu
- Abstract summary: We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint that guarantees low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
- Score: 73.55632827932101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the information-theoretical framework has been shown to yield
non-vacuous generalization bounds for large models trained by Stochastic
Gradient Langevin Dynamics (SGLD) with isotropic noise. In this paper, we
optimize the information-theoretical generalization bound by manipulating the
noise structure in SGLD. We prove that, under a constraint that guarantees low
empirical risk, the optimal noise covariance is the square root of the expected
gradient covariance when both the prior and the posterior are jointly optimized.
This validates that the optimal noise covariance is quite close to the empirical
gradient covariance. Technically, we develop a new information-theoretical bound
that enables such an optimization analysis. We then apply matrix analysis to
derive the form of the optimal noise covariance. The presented constraint and
results are validated by empirical observations.
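To make the main result concrete, here is a minimal sketch in NumPy (not the authors' implementation) of a single SGLD step whose injected noise has covariance equal to the matrix square root of the empirical gradient covariance estimated from per-example gradients. The helper names, the `temperature` parameter, and the `2 * lr * temperature` scaling convention are illustrative assumptions.

```python
import numpy as np

def psd_matrix_sqrt(mat):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    eigvals, eigvecs = np.linalg.eigh(mat)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative eigenvalues
    return eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

def sgld_step_anisotropic(theta, per_example_grads, lr=1e-2, temperature=1e-4):
    """One SGLD update whose noise covariance is sqrt(empirical gradient covariance).

    theta: parameter vector of shape (dim,).
    per_example_grads: array of shape (batch_size, dim), one gradient per example.
    """
    mean_grad = per_example_grads.mean(axis=0)
    centered = per_example_grads - mean_grad
    grad_cov = centered.T @ centered / per_example_grads.shape[0]  # empirical gradient covariance
    noise_cov = psd_matrix_sqrt(grad_cov)                          # square root, per the paper's result
    noise = np.random.multivariate_normal(
        mean=np.zeros_like(theta),
        cov=2.0 * lr * temperature * noise_cov,
    )
    return theta - lr * mean_grad + noise
```

Note that this forms a d-by-d covariance explicitly, which is only feasible for small parameter dimension d; the paper's statement concerns the expected gradient covariance, which this per-batch estimate only approximates.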
Related papers
- Bayes-optimal limits in structured PCA, and how to reach them [21.3083877172595]
We study the paradigmatic matrix model of principal components analysis (PCA), where a rank-one matrix is corrupted by additive noise.
We provide the first characterization of the Bayes-optimal limits of inference in this model.
We propose a novel approximate message passing algorithm (AMP), inspired by the theory of Adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2022-10-03T21:31:41Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should match the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Heavy-tailed denoising score matching [5.371337604556311]
We develop an iterative noise scaling algorithm to consistently initialise the multiple levels of noise in Langevin dynamics.
On the practical side, our use of heavy-tailed DSM leads to improved score estimation, controllable sampling convergence, and more balanced unconditional generative performance for imbalanced datasets.
arXiv Detail & Related papers (2021-12-17T22:04:55Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by gradient descent (SGD)
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Noisy Gradient Descent Converges to Flat Minima for Nonconvex Matrix
Factorization [36.182992409810446]
This paper investigates the importance of noise in non optimization problems.
We show that gradient descent can converge to any global form that converges to a global bias that is determined by the injected noise.
arXiv Detail & Related papers (2021-02-24T17:50:17Z) - Sample-Optimal PAC Learning of Halfspaces with Malicious Noise [4.8728183994912415]
We study efficient PAC learning of halfspaces in $mathRd$ in the presence of malicious noise of Valiant(1985)
We present a new analysis for the algorithm of Awasthi et al.( 2017) and show that it essentially achieves the near-optimal sample complexity bound of $tildeO(d)$.
We extend the algorithm and analysis to the more general and stronger nasty noise model of Bbbshoutyetal (2002), showing that it is still possible to achieve near-optimal noise tolerance and sample complexity in time.
arXiv Detail & Related papers (2021-02-11T20:18:20Z) - Beyond variance reduction: Understanding the true impact of baselines on
policy optimization [24.09670734037029]
We show that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates.
We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics.
arXiv Detail & Related papers (2020-08-31T17:52:09Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient
Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without a light-tails assumption on the noise (a minimal clipped-SGD sketch follows this list).
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
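The last two entries above both revolve around gradient clipping under heavy-tailed gradient noise. Below is a minimal clipped-SGD sketch for illustration; it is not clipped-SSTM, and the toy objective, clipping threshold, and step size are illustrative assumptions rather than the settings analyzed in those papers.

```python
import numpy as np

def clip_by_norm(g, clip_level):
    """Rescale g so that its Euclidean norm does not exceed clip_level."""
    norm = np.linalg.norm(g)
    return g if norm <= clip_level else g * (clip_level / norm)

def clipped_sgd(stochastic_grad, theta0, lr=1e-2, clip_level=1.0, steps=1000):
    """SGD in which every stochastic gradient is clipped before the update."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - lr * clip_by_norm(stochastic_grad(theta), clip_level)
    return theta

# Toy usage: minimize 0.5 * ||theta||^2 under heavy-tailed (Student-t) gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda theta: theta + rng.standard_t(df=2.0, size=theta.shape)
print(clipped_sgd(noisy_grad, theta0=np.ones(5)))
```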