Asymptotics of Ridge Regression in Convolutional Models
- URL: http://arxiv.org/abs/2103.04557v1
- Date: Mon, 8 Mar 2021 05:56:43 GMT
- Title: Asymptotics of Ridge Regression in Convolutional Models
- Authors: Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep
Rangan, Alyson K. Fletcher
- Abstract summary: We derive exact formulae for the estimation error of ridge estimators that hold in a certain high-dimensional regime.
Our experiments exhibit the double descent phenomenon for convolutional models, and our theoretical results match the experiments.
- Score: 26.910291664252973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding generalization and estimation error of estimators for simple
models such as linear and generalized linear models has attracted a lot of
attention recently. This is in part due to an interesting observation made in
the machine learning community: highly over-parameterized neural networks
achieve zero training error, yet they generalize well to test samples. This
phenomenon is captured by the so-called double descent
curve, where the generalization error starts decreasing again after the
interpolation threshold. A series of recent works has tried to explain this
phenomenon for simple models. In this work, we analyze the asymptotics of
estimation error in ridge estimators for convolutional linear models. These
convolutional inverse problems, also known as deconvolution, naturally arise in
different fields such as seismology, imaging, and acoustics among others. Our
results hold for a large class of input distributions that include i.i.d.
features as a special case. We derive exact formulae for the estimation error
of ridge estimators that hold in a certain high-dimensional regime. Our
experiments exhibit the double descent phenomenon for convolutional models, and
our theoretical results match these experiments.
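As a concrete illustration of the double descent behavior described above, the sketch below fits ridge estimators with a near-zero penalty in the i.i.d.-features special case the paper covers. It is a minimal toy experiment, not the authors' code; the convolutional structure of the design matrix is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge(X, y, lam):
    """Ridge estimator; the dual form keeps the solve well-conditioned for p > n."""
    n, p = X.shape
    if p <= n:
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    # identity: (X'X + lam*I_p)^{-1} X'y = X'(XX' + lam*I_n)^{-1} y
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

# toy setup: ground-truth linear model in ambient dimension D; the fitted
# model uses only the first p features, so p plays the role of model size
D, n, sigma = 400, 100, 0.5
w_true = rng.normal(size=D) / np.sqrt(D)
X = rng.normal(size=(n, D))
y = X @ w_true + sigma * rng.normal(size=n)
X_test = rng.normal(size=(5000, D))
y_test = X_test @ w_true + sigma * rng.normal(size=5000)

for p in [25, 50, 75, 95, 100, 105, 150, 250, 400]:
    w_hat = ridge(X[:, :p], y, lam=1e-4)
    mse = np.mean((X_test[:, :p] @ w_hat - y_test) ** 2)
    # test error rises toward the interpolation threshold p = n,
    # then descends again in the over-parameterized regime
    print(f"p={p:4d}  test MSE = {mse:.3f}")
```

Increasing the ridge penalty smooths out the peak at p = n, which is the kind of regularization effect the exact formulae quantify.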
Related papers
- Aliasing and Label-Independent Decomposition of Risk: Beyond the bias-variance trade-off [0.0]
A central problem in data science is to use potentially noisy samples to predict function values for unseen inputs.
We introduce an alternative paradigm called the generalized aliasing decomposition (GAD).
The GAD can be calculated explicitly from the relationship between the model class and the samples, without seeing any data labels.
arXiv Detail & Related papers (2024-08-15T17:49:24Z)
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
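For reference, such derivations typically start from the ridge estimator and the exact (conditional-on-X) bias-variance decomposition of its estimation risk for y = Xβ + ε with noise variance σ²; random matrix theory then characterizes the trace and quadratic form asymptotically. A standard statement, in notation assumed here:

```latex
\hat{\beta}_\lambda = (X^\top X + n\lambda I)^{-1} X^\top y,
\qquad \hat{\Sigma} = \tfrac{1}{n} X^\top X,
\qquad
\mathbb{E}\big[\|\hat{\beta}_\lambda - \beta\|^2 \,\big|\, X\big]
= \underbrace{\lambda^2\, \beta^\top (\hat{\Sigma} + \lambda I)^{-2} \beta}_{\text{bias}^2}
+ \underbrace{\tfrac{\sigma^2}{n}\,\mathrm{tr}\big[\hat{\Sigma}(\hat{\Sigma} + \lambda I)^{-2}\big]}_{\text{variance}}.
```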
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We show exactly when and where double descent appears, and that its location is not inherently tied to the interpolation threshold p=n.
This resolves the tension between double descent and classical statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Analysis of Interpolating Regression Models and the Double Descent Phenomenon [3.883460584034765]
It is commonly assumed that models which interpolate noisy training data generalize poorly.
The best models obtained are overparametrized, and the test error exhibits double descent behavior as the model order increases.
We derive a result based on the behavior of the smallest singular value of the regression matrix that explains the peak location and the double descent shape of the testing error as a function of model order.
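A quick numerical check of this mechanism, in a hypothetical toy setup rather than the paper's: the smallest singular value of an n-by-p Gaussian regression matrix collapses near p = n, and since the least-squares solution scales like 1/sigma_min, the test error peaks there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # number of training samples
X_full = rng.normal(size=(n, 300))       # toy Gaussian design (not the paper's data)

for p in [50, 80, 95, 100, 105, 130, 200, 300]:
    X = X_full[:, :p]                    # regression matrix at model order p
    s_min = np.linalg.svd(X, compute_uv=False).min()
    # sigma_min dips toward 0 near p = n, inflating the (minimum-norm)
    # least-squares solution and producing the double descent peak
    print(f"p={p:4d}  sigma_min = {s_min:7.3f}")
```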
arXiv Detail & Related papers (2023-04-17T09:44:33Z)
- Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures [14.578025146641806]
We provide a full and unified analysis of the whole time-evolution of the generalization curve.
We show that our theoretical predictions adequately match the learning curves obtained by gradient descent over realistic datasets.
arXiv Detail & Related papers (2022-12-13T17:39:18Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model [16.067228939231047]
We analyze the whole temporal behavior of the generalization and training errors under gradient flow.
We show that in the limit of large system size the full time-evolution path of both errors can be calculated analytically.
Our techniques are based on Cauchy complex integral representations of the errors together with recent random matrix methods based on linear pencils.
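In the linear (fixed random features) setting, gradient flow on the squared loss has a closed form that makes such temporal descents visible: each eigenmode of the empirical covariance relaxes at its own rate. A standard statement, in notation assumed here rather than taken from the paper:

```latex
L(\theta) = \tfrac{1}{2n}\|X\theta - y\|^2, \qquad
\dot{\theta}(t) = -\nabla L(\theta(t))
\;\;\Longrightarrow\;\;
\theta(t) - \theta^\ast = e^{-\hat{\Sigma} t}\big(\theta(0) - \theta^\ast\big),
\qquad \hat{\Sigma} = \tfrac{1}{n} X^\top X,
```

where θ* is a minimizer; along each eigenvector of Σ̂ with eigenvalue λᵢ the error contracts as e^(−λᵢt), so well-separated eigenvalues produce plateaus and successive descents in time.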
arXiv Detail & Related papers (2021-10-22T14:25:54Z)
- Predicting Unreliable Predictions by Shattering a Neural Network [145.3823991041987]
Piecewise linear neural networks can be split into subfunctions.
Subfunctions have their own activation pattern, domain, and empirical error.
Empirical error for the full network can be written as an expectation over subfunctions.
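A toy illustration of this decomposition, with a hypothetical network and data (not from the paper): group the per-sample errors of a small ReLU network by activation pattern and verify that the pattern-frequency-weighted average recovers the full empirical error.

```python
import numpy as np

rng = np.random.default_rng(0)

# a tiny one-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(1, 8))

def forward(X):
    H = np.maximum(W1 @ X.T, 0.0)            # hidden activations (ReLU)
    return (W2 @ H).ravel(), (H > 0)         # outputs and activation patterns

X = rng.normal(size=(1000, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)   # arbitrary toy targets
pred, patterns = forward(X)
err = (pred - y) ** 2

# each distinct activation pattern indexes a linear subfunction; the full
# empirical error is the frequency-weighted mean of the per-pattern errors
groups = {}
for key, e in zip((pat.tobytes() for pat in patterns.T), err):
    groups.setdefault(key, []).append(e)
total = sum(len(v) / len(err) * np.mean(v) for v in groups.values())
print(len(groups), "subfunctions; decomposition exact:", np.isclose(total, err.mean()))
```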
arXiv Detail & Related papers (2021-06-15T18:34:41Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
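The connection rests on the chain rule of probability: the log marginal likelihood decomposes into a sum of one-step-ahead predictive log probabilities, so a model that quickly assigns high probability to each new example given the previous ones has a high marginal likelihood:

```latex
\log p(\mathcal{D}) \;=\; \sum_{i=1}^{n} \log p\big(y_i \mid x_i, \mathcal{D}_{<i}\big),
\qquad \mathcal{D}_{<i} = \{(x_j, y_j)\}_{j < i}.
```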
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
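A minimal experiment in this spirit, using a hypothetical linear setup rather than the paper's methodology: sample many interpolating linear classifiers (the minimum-norm one plus random null-space components) and inspect the spread of their test errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 120                               # over-parameterized: p > n
w_true = rng.normal(size=p) / np.sqrt(p)     # toy teacher (not from the paper)
X = rng.normal(size=(n, p)); y = np.sign(X @ w_true)
X_test = rng.normal(size=(5000, p)); y_test = np.sign(X_test @ w_true)

# every w = w_mn + v with v in the null space of X fits the training data
# exactly, so all of these models are interpolators
w_mn = np.linalg.lstsq(X, y, rcond=None)[0]  # minimum-norm interpolator
null_basis = np.linalg.svd(X)[2][n:].T       # (p, p - n) basis of null(X)
errs = [np.mean(np.sign(X_test @ (w_mn + 0.1 * null_basis
                                  @ rng.normal(size=p - n))) != y_test)
        for _ in range(200)]
print(f"min={min(errs):.3f}  median={np.median(errs):.3f}  max={max(errs):.3f}")
```

Most draws land near a common typical error, with the worse interpolators forming the tail, consistent with the concentration described above.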
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Generalization Error of Generalized Linear Models in High Dimensions [25.635225717360466]
We provide a general framework to characterize the generalization error of single-layer neural networks (generalized linear models) with arbitrary non-linearities.
We analyze the effect of regularized logistic regression on learning.
Our model also captures mismatched training and test distributions as special cases.
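For context, the single-layer model class analyzed by such frameworks can be written compactly; a common form (notation assumed here, not taken from the paper) is

```latex
y_i = \phi\big(\langle x_i, w^\ast \rangle,\, \varepsilon_i\big), \qquad i = 1, \dots, n,
```

where φ is an arbitrary output non-linearity and εᵢ is noise: φ(z, ε) = z + ε recovers linear regression, while φ(z, ε) = sign(z + ε) gives a noisy linear classifier.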
arXiv Detail & Related papers (2020-05-01T02:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.