Failures of model-dependent generalization bounds for least-norm interpolation
- URL: http://arxiv.org/abs/2010.08479v3
- Date: Wed, 20 Jan 2021 17:05:24 GMT
- Title: Failures of model-dependent generalization bounds for least-norm interpolation
- Authors: Peter L. Bartlett and Philip M. Long
- Abstract summary: We consider bounds on the generalization performance of the least-norm linear regressor. For a variety of natural joint distributions on training examples, any valid generalization bound that depends only on the learned predictor, the sample size, and the confidence parameter must sometimes be very loose.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider bounds on the generalization performance of the least-norm linear regressor, in the over-parameterized regime where it can interpolate the data. We describe a sense in which any generalization bound of a type that is commonly proved in statistical learning theory must sometimes be very loose when applied to analyze the least-norm interpolant. In particular, for a variety of natural joint distributions on training examples, any valid generalization bound that depends only on the output of the learning algorithm, the number of training examples, and the confidence parameter, and that satisfies a mild condition (substantially weaker than monotonicity in sample size), must sometimes be very loose: it can be bounded below by a constant even as the true excess risk goes to zero.