Optimal regularizations for data generation with probabilistic graphical models
- URL: http://arxiv.org/abs/2112.01292v1
- Date: Thu, 2 Dec 2021 14:45:16 GMT
- Title: Optimal regularizations for data generation with probabilistic graphical models
- Authors: Arnaud Fanthomme (ENS Paris), F Rizzato, S Cocco, R Monasson
- Abstract summary: Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of L2 and L1 regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the role of regularization is a central question in
Statistical Inference. Empirically, well-chosen regularization schemes often
dramatically improve the quality of the inferred models by avoiding
overfitting of the training data. We consider here the particular case of L2
and L1 regularizations in the Maximum A Posteriori (MAP) inference of
generative pairwise graphical models. Based on analytical calculations on
multivariate Gaussian distributions and numerical experiments on Gaussian and
Potts models, we study the likelihoods of the training, test, and 'generated'
data sets (the latter sampled from the inferred models) as functions of the
regularization strengths. We show in particular that, at its maximum, the
test likelihood and the 'generated' likelihood, which quantifies the quality
of the generated samples, take remarkably close values. The optimal value of
the regularization strength is found to be approximately equal to the inverse
of the sum of the squared couplings incoming on each site of the underlying
interaction network. Our results appear largely independent of the structure
of the true interactions that generated the data and of the regularization
scheme considered, and they remain valid when small fluctuations of the
posterior distribution around the MAP estimator are taken into account.
Connections with empirical works on protein models learned from homologous
sequences are discussed.
Related papers
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or ones governed by a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by gradient descent (SGD)
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - Posterior-Aided Regularization for Likelihood-Free Inference [23.708122045184698]
Posterior-Aided Regularization (PAR) is applicable to learning the density estimator, regardless of the model structure.
We provide a unified estimation method of PAR that estimates both the reverse KL term and the mutual information term with a single neural network.
arXiv Detail & Related papers (2021-02-15T16:59:30Z) - Binary Classification of Gaussian Mixtures: Abundance of Support
Vectors, Benign Overfitting and Regularization [39.35822033674126]
We study binary linear classification under a generative Gaussian mixture model.
We derive novel non-asymptotic bounds on the resulting classification error.
Our results extend to a noisy model with constant probability noise flips.
arXiv Detail & Related papers (2020-11-18T07:59:55Z) - Estimating Linear Mixed Effects Models with Truncated Normally
Distributed Random Effects [5.4052819252055055]
Inference can be conducted using a maximum likelihood approach if Normal distributions are assumed on the random effects.
In this paper we extend the classical (unconstrained) LME models to allow for sign constraints on its overall coefficients.
arXiv Detail & Related papers (2020-11-09T16:17:35Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
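As referenced in the random-features entry above, here is a minimal toy illustration of double descent. It uses ridgeless (minimum-norm) least squares on random ReLU features rather than the paper's SGD analysis; the dimensions, feature map, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy double descent: minimum-norm least squares on random ReLU features.
d, n_train, n_test = 10, 100, 5000
w_true = rng.normal(size=d) / np.sqrt(d)     # true linear teacher

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

for p in [20, 50, 90, 100, 110, 200, 1000]:  # number of random features
    W = rng.normal(size=(d, p)) / np.sqrt(d)  # fixed random projection
    F_tr = np.maximum(X_tr @ W, 0.0)          # ReLU random features
    F_te = np.maximum(X_te @ W, 0.0)
    a = np.linalg.pinv(F_tr) @ y_tr           # minimum-norm interpolator
    mse = np.mean((F_te @ a - y_te) ** 2)
    print(f"p={p:5d}  test MSE={mse:.4f}")    # error peaks near p = n_train
```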