Benign overfitting and adaptive nonparametric regression
- URL: http://arxiv.org/abs/2206.13347v1
- Date: Mon, 27 Jun 2022 14:50:14 GMT
- Title: Benign overfitting and adaptive nonparametric regression
- Authors: Julien Chhor, Suzanne Sigalla and Alexandre B. Tsybakov
- Abstract summary: We construct an estimator which is a continuous function interpolating the data points with high probability.
We attain minimax optimal rates under mean squared risk on the scale of Hölder classes adaptively to the unknown smoothness.
- Score: 71.70323672531606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the nonparametric regression setting, we construct an estimator which is a
continuous function interpolating the data points with high probability, while
attaining minimax optimal rates under mean squared risk on the scale of
H\"older classes adaptively to the unknown smoothness.
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters. A minimal sketch of the kind of estimator being modified follows.
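Below is a plain one-nearest-neighbour matching estimator of the average treatment effect, sketched for a scalar covariate; the names are illustrative and the paper's root-$n$-consistent modification is not reproduced here.
```python
import numpy as np

def matching_ate(x, y, t):
    """1-NN matching estimate of the average treatment effect with a
    scalar covariate x, outcome y, treatment indicator t in {0, 1}.
    Each unit's missing counterfactual is imputed from its nearest
    match in the other group; no bias correction is applied."""
    x, y, t = map(np.asarray, (x, y, t))
    xt, xc = x[t == 1], x[t == 0]
    y1, y0 = y[t == 1], y[t == 0]
    eff_t = y1 - y0[np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)]
    eff_c = y1[np.abs(xc[:, None] - xt[None, :]).argmin(axis=1)] - y0
    return np.concatenate([eff_t, eff_c]).mean()
```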
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Stochastic Gradient Descent for Nonparametric Regression [11.24895028006405]
This paper introduces an iterative algorithm for training nonparametric additive models.
We show that the resulting estimator satisfies an oracle inequality that allows for model mis-specification.
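For concreteness, here is a generic sketch of iterative (SGD) training of an additive model via a fixed sine-basis expansion; the basis, step size, and schedule are placeholders rather than the paper's algorithm, and covariates are assumed to lie in $[0,1]^d$.
```python
import numpy as np

def sgd_additive_fit(X, y, n_basis=10, lr=0.05, epochs=50, seed=0):
    """Fit an additive model f(x) = sum_j f_j(x_j) by SGD on squared
    loss, each component expanded in a sine basis on [0, 1]."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = np.arange(1, n_basis + 1)
    feats = lambda Z: np.concatenate(
        [np.sin(np.pi * k * Z[:, [j]]) for j in range(d)], axis=1)
    Phi, w = feats(X), np.zeros(d * n_basis)
    for _ in range(epochs):
        for i in rng.permutation(n):
            w -= lr * (Phi[i] @ w - y[i]) * Phi[i]   # stochastic gradient
    return lambda Z: feats(Z) @ w
```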
arXiv Detail & Related papers (2024-01-01T08:03:52Z)
- Adaptive and non-adaptive minimax rates for weighted Laplacian-eigenmap based nonparametric regression [14.003044924094597]
We show both adaptive and non-adaptive minimax rates of convergence for a family of weighted Laplacian-eigenmap based nonparametric regression methods.
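A bare-bones, unweighted member of this family for illustration: project the responses onto the $k$ smoothest eigenvectors of a kernel graph Laplacian (the bandwidth and $k$ below are arbitrary placeholders).
```python
import numpy as np

def laplacian_eigenmap_fit(X, y, k=10, bandwidth=0.2):
    """In-sample Laplacian-eigenmap regression: build a Gaussian
    affinity graph, take the k lowest-frequency eigenvectors of its
    unnormalised Laplacian, and project y onto their span."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * bandwidth ** 2))     # affinity matrix
    L = np.diag(W.sum(axis=1)) - W             # unnormalised Laplacian
    _, eigvecs = np.linalg.eigh(L)             # ascending eigenvalues
    U = eigvecs[:, :k]                         # smoothest eigenvectors
    return U @ (U.T @ y)                       # fitted values at X
```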
arXiv Detail & Related papers (2023-10-31T20:25:36Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Interpolating Discriminant Functions in High-Dimensional Gaussian Latent Mixtures [1.4213973379473654]
This paper considers binary classification of high-dimensional features under a postulated model.
A generalized least squares estimator is used to estimate the direction of the optimal separating hyperplane.
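A plain least-squares version of this idea, for intuition only; the paper's generalized least squares variant, tailored to its latent mixture model, is not reproduced.
```python
import numpy as np

def ls_discriminant_direction(X, y):
    """Estimate the separating-hyperplane direction by regressing
    +/-1 labels on the features and normalising the coefficients."""
    y_pm = np.where(np.asarray(y) > 0, 1.0, -1.0)
    beta, *_ = np.linalg.lstsq(X, y_pm, rcond=None)
    return beta / np.linalg.norm(beta)
```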
arXiv Detail & Related papers (2022-10-25T21:19:50Z)
- Improving Generalization via Uncertainty Driven Perturbations [107.45752065285821]
We consider uncertainty-driven perturbations of the training data points.
Unlike loss-driven perturbations, uncertainty-guided perturbations do not cross the decision boundary.
We show that UDP is guaranteed to achieve the maximum-margin decision boundary on linear models.
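One way to read the no-crossing property on a linear classifier $f(x) = w \cdot x + b$ is sketched below: the perturbation moves a point toward the boundary but caps the step at the signed distance, so the point can never cross. This is an illustrative reading, not the paper's general-purpose algorithm.
```python
import numpy as np

def udp_linear(x, w, b, eps):
    """Perturb x toward the decision boundary of f(x) = w @ x + b,
    capping the step at the boundary distance so the perturbed point
    stays on its original side."""
    w_norm = np.linalg.norm(w)
    margin = (w @ x + b) / w_norm       # signed distance to the boundary
    step = min(eps, abs(margin))        # never overshoot the boundary
    return x - np.sign(margin) * step * w / w_norm
```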
arXiv Detail & Related papers (2022-02-11T16:22:08Z)
- Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the random-order online optimization setting of Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented online in a uniformly random order.
We show that the algorithms of Garber et al. (2020) achieve the optimal bounds and significantly improve their stability.
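The defining feature of the setting is easy to state in code: adversarially chosen losses are processed in a uniformly random order. A minimal online-gradient-descent sketch (not the optimal-rate algorithm):
```python
import numpy as np

def random_order_ogd(grads, x0, lr, seed=0):
    """Run online gradient descent over a uniformly random permutation
    of the loss sequence; `grads` is a list of gradient oracles g_t(x)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in rng.permutation(len(grads)):      # random arrival order
        x -= lr * grads[t](x)
    return x
```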
arXiv Detail & Related papers (2021-06-29T09:48:46Z)
- Distribution-Free Robust Linear Regression [5.532477732693]
We study random design linear regression with no assumptions on the distribution of the covariates.
We construct a non-linear estimator achieving excess risk of order $d/n$ with the optimal sub-exponential tail.
We prove an optimal version of the classical bound for the truncated least squares estimator due to Györfi, Kohler, Krzyżak, and Walk.
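In the spirit of that estimator, a minimal sketch of truncated least squares: fit by ordinary least squares, then clip the predictions at a cutoff. The data-driven level used here is a heuristic placeholder, not the calibrated choice analysed in the paper.
```python
import numpy as np

def truncated_least_squares(X, y, level=None):
    """OLS fit whose predictions are truncated at +/- level, which
    guards against heavy-tailed responses blowing up the risk."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    if level is None:
        level = np.quantile(np.abs(y), 0.95)   # heuristic cutoff
    return lambda Xnew: np.clip(Xnew @ beta, -level, level)
```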
arXiv Detail & Related papers (2021-02-25T15:10:41Z)
- Support estimation in high-dimensional heteroscedastic mean regression [2.28438857884398]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
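A generic stand-in for such a procedure: proximal gradient descent on a smooth pseudo-Huber loss with an $\ell_1$ penalty. The specific loss variant and the problem-dependent tuning from the paper are not reproduced.
```python
import numpy as np

def huber_lasso(X, y, delta=1.0, lam=0.1, lr=0.01, iters=1000):
    """Proximal gradient descent on mean pseudo-Huber loss + l1 penalty,
    a robust sparse-regression sketch for heavy-tailed errors."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(iters):
        r = X @ beta - y
        psi = r / np.sqrt(1 + (r / delta) ** 2)        # pseudo-Huber score
        beta -= lr * (X.T @ psi) / n                   # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0)
    return beta
```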
arXiv Detail & Related papers (2020-11-03T09:46:31Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
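For illustration, a bare-bones proximal gradient sketch of the non-smoothed, single-task square-root Lasso objective $\|y - X\beta\|_2/\sqrt{n} + \lambda\|\beta\|_1$, whose optimal $\lambda$ does not depend on the noise level (the "pivotal" property). Step size and stopping rule are placeholders.
```python
import numpy as np

def sqrt_lasso(X, y, lam=0.5, lr=0.01, iters=2000):
    """Proximal gradient descent on the pivotal square-root Lasso
    objective ||y - X b||_2 / sqrt(n) + lam * ||b||_1."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(iters):
        r = y - X @ beta
        nrm = np.linalg.norm(r)
        if nrm < 1e-12:                       # residual vanished: stop
            break
        beta += lr * (X.T @ r) / (np.sqrt(n) * nrm)    # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0)
    return beta
```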
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.