Minimax Semiparametric Learning With Approximate Sparsity
- URL: http://arxiv.org/abs/1912.12213v7
- Date: Thu, 31 Jul 2025 18:34:27 GMT
- Title: Minimax Semiparametric Learning With Approximate Sparsity
- Authors: Jelena Bradic, Victor Chernozhukov, Whitney K. Newey, Yinchu Zhu
- Abstract summary: This paper formalizes the concept of approximate model sparsity through classical semi-parametric theory. We derive minimax rates for a regression slope and an average derivative, finding these bounds to be substantially larger than those in low-dimensional, semi-parametric settings.
- Score: 3.5136198842746524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating linear, mean-square continuous functionals is a pivotal challenge in statistics. In high-dimensional contexts, this estimation is often performed under the assumption of exact model sparsity, meaning that only a small number of parameters are precisely non-zero. This excludes models where linear formulations only approximate the underlying data distribution, for example nonparametric regression methods that use basis expansions such as splines, kernel methods, or polynomial regressions. Many recent methods for root-$n$ estimation have been proposed, but the implications of exact model sparsity remain largely unexplored. In particular, minimax optimality for models that are not exactly sparse has not yet been developed. This paper formalizes the concept of approximate sparsity through classical semi-parametric theory. We derive minimax rates under this formulation for a regression slope and an average derivative, finding these bounds to be substantially larger than those in low-dimensional, semi-parametric settings. We identify several new phenomena. We discover new regimes where rate double robustness does not hold, yet root-$n$ estimation is still possible. In these settings, we propose an estimator that achieves minimax optimal rates. Our findings further reveal distinct optimality boundaries for ordered versus unordered nonparametric regression estimation.
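As a concrete, hedged illustration of the kind of root-$n$ estimator the abstract alludes to (this is the standard cross-fitted, Neyman-orthogonal partialling-out estimator from the debiased machine learning literature, not the paper's proposed minimax-optimal procedure), the sketch below estimates a regression slope with Lasso-fitted nuisance functions on a simulated design whose coefficients decay polynomially rather than being exactly zero. The function name debiased_slope, the tuning choices, and the simulated data are assumptions made for this example only.

```python
# Hedged sketch (not the paper's estimator): cross-fitted "partialling-out"
# estimation of the slope theta in Y = theta*D + g(X) + eps, with Lasso
# nuisance fits. Names and tuning choices are assumptions for this example.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

def debiased_slope(y, d, X, n_splits=5, seed=0):
    """Return (theta_hat, standard_error) from cross-fitted residual-on-residual regression."""
    y_res = np.empty_like(y, dtype=float)
    d_res = np.empty_like(d, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Residualize the outcome and the regressor of interest on the high-dimensional controls.
        y_res[test] = y[test] - LassoCV(cv=5).fit(X[train], y[train]).predict(X[test])
        d_res[test] = d[test] - LassoCV(cv=5).fit(X[train], d[train]).predict(X[test])
    theta = d_res @ y_res / (d_res @ d_res)
    eps = y_res - theta * d_res
    # Plug-in standard error based on the orthogonal moment's influence function.
    se = np.sqrt(np.mean((d_res * eps) ** 2)) / (np.mean(d_res ** 2) * np.sqrt(len(y)))
    return theta, se

# Simulated approximately sparse design: coefficients decay like j**-2, none are exactly zero.
rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.normal(size=(n, p))
beta = 1.0 / np.arange(1, p + 1) ** 2
d = X @ beta + rng.normal(size=n)
y = 0.5 * d + X @ beta + rng.normal(size=n)
print(debiased_slope(y, d, X))  # theta_hat should be close to 0.5
```

Orthogonalized moments of this type are where the abstract's rate double robustness discussion applies: $\sqrt{n}$-consistency holds when the product of the two nuisance estimation errors is $o(n^{-1/2})$, a condition that can fail under approximate sparsity even when each nuisance is consistently estimated.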
Related papers
- Local Polynomial Lp-norm Regression [0.0]
Local least squares estimation cannot provide optimal results when non-Gaussian noise is present. It is suggested that $L_p$-norm estimators be used to minimize the residuals when these exhibit non-normal kurtosis. We show our method's superiority over local least squares on one-dimensional data and report promising outcomes in higher dimensions, specifically in 2D.
arXiv Detail & Related papers (2025-04-25T21:04:19Z) - Debiased Nonparametric Regression for Statistical Inference and Distributionally Robustness [10.470114319701576]
We introduce a model-free debiasing method for smooth nonparametric regression estimators.
We obtain a debiased estimator that satisfies pointwise and uniform risk convergence, along with smoothness, under mild conditions.
arXiv Detail & Related papers (2024-12-28T15:01:19Z) - Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - A Bound on the Maximal Marginal Degrees of Freedom [0.0]
Common kernel ridge regression is expensive in memory allocation and computation time.
This paper addresses low rank approximations and surrogates for kernel ridge regression.
arXiv Detail & Related papers (2024-02-20T10:25:44Z) - Batches Stabilize the Minimum Norm Risk in High Dimensional Overparameterized Linear Regression [12.443289202402761]
We show the benefits of batch-partitioning through the lens of a minimum-norm overparametrized linear regression model.
We characterize the optimal batch size and show it is inversely proportional to the noise level.
We also show that shrinking the batch minimum-norm estimator by a factor equal to the Wiener coefficient further stabilizes it and results in lower quadratic risk in all settings.
arXiv Detail & Related papers (2023-06-14T11:02:08Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Interpolating Discriminant Functions in High-Dimensional Gaussian Latent
Mixtures [1.4213973379473654]
This paper considers binary classification of high-dimensional features under a postulated model.
A generalized least squares estimator is used to estimate the direction of the optimal separating hyperplane.
arXiv Detail & Related papers (2022-10-25T21:19:50Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient
Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
arXiv Detail & Related papers (2020-10-09T22:54:38Z) - Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in
High Dimensions [41.7567932118769]
Empirical Risk Minimization algorithms are widely used in a variety of estimation and prediction tasks.
In this paper, we characterize for the first time the fundamental limits on the statistical accuracy of convex ERM for inference.
arXiv Detail & Related papers (2020-06-16T04:27:38Z) - Robust subgaussian estimation with VC-dimension [0.0]
This work proposes a new general way to bound the excess risk for MOM estimators.
The core technique is the use of VC-dimension (instead of Rademacher complexity) to measure the statistical complexity.
arXiv Detail & Related papers (2020-04-24T13:21:09Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
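To make the pivotal property in the last entry concrete, a standard square-root Lasso formulation (stated here from the general literature, not quoted from that paper) replaces the squared loss by its root, so that a theoretically valid penalty level does not depend on the unknown noise level $\sigma$:
$$\hat\beta \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1, \qquad \lambda \asymp \sqrt{\frac{\log p}{n}},$$
in contrast to the ordinary Lasso, whose theoretically motivated penalty scales with $\sigma$.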
This list is automatically generated from the titles and abstracts of the papers in this site.