On the robustness of minimum-norm interpolators
- URL: http://arxiv.org/abs/2012.00807v2
- Date: Wed, 20 Jan 2021 14:37:55 GMT
- Title: On the robustness of minimum-norm interpolators
- Authors: Geoffrey Chinot, Matthias Löffler and Sara van de Geer
- Abstract summary: This article develops a general theory for minimum-norm interpolated estimators in linear models in the presence of additive, potentially adversarial, errors.
A quantitative bound for the prediction error is given, relating it to the Rademacher complexity of the covariates, the norm of the minimum-norm interpolator of the errors, and the shape of the subdifferential around the true parameter.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article develops a general theory for minimum-norm interpolated
estimators in linear models in the presence of additive, potentially
adversarial, errors. In particular, no conditions on the errors are imposed. A
quantitative bound for the prediction error is given, relating it to the
Rademacher complexity of the covariates, the norm of the minimum norm
interpolator of the errors and the shape of the subdifferential around the true
parameter. The general theory is illustrated with several examples: the sparse
linear model with minimum $\ell_1$-norm or group Lasso penalty interpolation,
the low rank trace regression model with nuclear norm minimization, and minimum
Euclidean norm interpolation in the linear model. In the case of sparsity- or
low-rank-inducing norms, minimum-norm interpolation yields a prediction error
of the order of the average noise level, provided that the overparameterization
is at least a logarithmic factor larger than the number of samples. Lower
bounds showing near-optimality of these results complement the analysis.
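For concreteness, here is a minimal sketch (not code from the paper) of two of the interpolators the abstract refers to: the minimum Euclidean-norm interpolator, obtained from the Moore-Penrose pseudoinverse, and the minimum $\ell_1$-norm interpolator (basis pursuit), solved as a linear program with SciPy. The Gaussian design and noise below are illustrative assumptions only; the paper's theory imposes no conditions on the errors.

```python
# Minimal sketch of minimum-norm interpolation in an overparameterized
# linear model y = X beta + xi with n samples and p > n features.
# Illustrative data only; not code from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, s = 50, 500, 5                    # samples, features, sparsity

X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0                     # s-sparse true parameter
y = X @ beta_true + 0.1 * rng.standard_normal(n)  # additive errors (here Gaussian)

# Minimum Euclidean (l2) norm interpolator: beta = X^+ y, the least-norm
# solution of the underdetermined system X beta = y.
beta_l2 = np.linalg.pinv(X) @ y

# Minimum l1-norm interpolator (basis pursuit):
#   minimize ||beta||_1  subject to  X beta = y,
# via the standard LP reformulation beta = u - v with u, v >= 0.
c = np.ones(2 * p)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * p))
beta_l1 = res.x[:p] - res.x[p:]

for name, b in [("l2", beta_l2), ("l1", beta_l1)]:
    print(f"min-{name}: interpolation residual {np.linalg.norm(X @ b - y):.2e}, "
          f"estimation error {np.linalg.norm(b - beta_true):.3f}")
```

In this regime one typically sees the min-$\ell_1$ interpolator estimate the sparse signal far more accurately than the min-$\ell_2$ one, consistent with the abstract's claim that sparsity-inducing norms yield a prediction error of the order of the average noise level.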
Related papers
- Generalization error of min-norm interpolators in transfer learning [2.7309692684728617]
Min-norm interpolators emerge naturally as implicit regularized limits of modern machine learning algorithms.
In many applications, a limited amount of test data may be available during training, yet the properties of min-norm interpolators in this setting are not well understood.
We establish a novel anisotropic local law to characterize their generalization error.
arXiv Detail & Related papers (2024-06-20T02:23:28Z)
- Optimal Differentially Private PCA and Estimation for Spiked Covariance Matrices [10.377683220196873]
Estimating a covariance matrix and its associated principal components is a fundamental problem in contemporary statistics.
We study optimal differentially private Principal Component Analysis (PCA) and covariance estimation within the spiked covariance model.
We propose computationally efficient differentially private estimators and prove their minimax optimality for sub-Gaussian distributions.
arXiv Detail & Related papers (2024-01-08T11:18:14Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting [35.78863301525758]
We prove a generic uniform convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class.
Applying the generic bound to Euclidean norm balls recovers the consistency result of Bartlett et al. (2020) for minimum-norm interpolators.
We show how norm-based generalization bounds can explain and be used to analyze benign overfitting, at least in some settings.
arXiv Detail & Related papers (2021-06-17T06:58:10Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central to preventing overfitting in practice.
This work considers the issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares; a toy sketch of this setting appears after the list.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Failures of model-dependent generalization bounds for least-norm interpolation [39.97534972432276]
We consider bounds on the generalization performance of the least-norm linear regressor.
For a variety of natural joint distributions on training examples, any valid generalization bound must sometimes be very loose.
arXiv Detail & Related papers (2020-10-16T16:30:05Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- On Uniform Convergence and Low-Norm Interpolation Learning [25.96459928769678]
We show that uniformly bounding the difference between empirical and population errors cannot establish any learning in the norm ball.
We argue that the consistency of the minimal-norm interpolator can instead be explained by a slightly weaker, yet standard, notion: uniform convergence of zero-error predictors in a norm ball.
arXiv Detail & Related papers (2020-06-10T16:49:28Z)
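As referenced in the constant-stepsize SGD entry above, here is a minimal, hypothetical sketch of that paper's setting: one-pass SGD with a fixed stepsize on the squared loss for linear regression, with tail-averaged iterates. The Gaussian data, stepsize, and averaging window are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of constant-stepsize SGD for linear regression:
# one pass over a data stream with a fixed stepsize, then tail-averaging
# of the iterates. Illustrative choices throughout; not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 50
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta_star + 0.1 * rng.standard_normal(n)

eta = 0.01                  # constant stepsize (must be small vs. E||x||^2)
beta = np.zeros(p)
iterates = []
for x_i, y_i in zip(X, y):  # single pass over the stream
    beta -= eta * (x_i @ beta - y_i) * x_i  # stochastic gradient of 0.5*(x'b - y)^2
    iterates.append(beta.copy())

# Tail-average the last half of the iterates: the standard device for
# obtaining convergence despite the constant stepsize.
beta_avg = np.mean(iterates[n // 2:], axis=0)
print(f"parameter error of tail-averaged iterate: "
      f"{np.linalg.norm(beta_avg - beta_star)**2:.4f}")
```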