Minimax Semiparametric Learning With Approximate Sparsity
- URL: http://arxiv.org/abs/1912.12213v6
- Date: Mon, 8 Aug 2022 13:21:56 GMT
- Title: Minimax Semiparametric Learning With Approximate Sparsity
- Authors: Jelena Bradic, Victor Chernozhukov, Whitney K. Newey, Yinchu Zhu
- Abstract summary: This paper is about the feasibility and means of root-n consistently estimating linear, mean-square continuous functionals of a high dimensional, approximately sparse regression.
We give lower bounds on the convergence rate of estimators of a regression slope and an average derivative.
We also give debiased machine learners that are root-n consistent under either a minimal approximate sparsity condition or rate double robustness.
- Score: 3.2116198597240846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is about the feasibility and means of root-n consistently
estimating linear, mean-square continuous functionals of a high dimensional,
approximately sparse regression. Such objects include a wide variety of
interesting parameters such as regression coefficients, average derivatives,
and the average treatment effect. We give lower bounds on the convergence rate
of estimators of a regression slope and an average derivative and find that
these bounds are substantially larger than in a low dimensional, semiparametric
setting. We also give debiased machine learners that are root-n consistent
under either a minimal approximate sparsity condition or rate double
robustness. These estimators improve on existing estimators in being root-n
consistent under more general conditions than previously known.
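To make the debiasing idea concrete, here is a minimal sketch of a cross-fitted debiased (double) machine learning estimator of the slope in a partially linear model, with Lasso nuisance estimates. It illustrates the Neyman-orthogonal moment construction the paper builds on, not the paper's exact estimators; all tuning choices below are illustrative.

```python
# Minimal sketch: cross-fitted debiased slope in the partially linear model
#   Y = D * theta + g(X) + e,  D = m(X) + v,
# with Lasso nuisance estimates. Illustrative only; the paper's estimators
# involve specific regularization and Riesz-representer constructions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

def debiased_slope(y, d, X, n_splits=5, seed=0):
    res_y = np.empty_like(y)
    res_d = np.empty_like(d)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Fit nuisances on the auxiliary folds (cross-fitting).
        g_hat = LassoCV(cv=3).fit(X[train], y[train])
        m_hat = LassoCV(cv=3).fit(X[train], d[train])
        # Residualize outcome and treatment on the held-out fold.
        res_y[test] = y[test] - g_hat.predict(X[test])
        res_d[test] = d[test] - m_hat.predict(X[test])
    # Neyman-orthogonal moment: regress residualized Y on residualized D.
    theta = (res_d @ res_y) / (res_d @ res_d)
    # Plug-in standard error from the orthogonal score.
    psi = res_d * (res_y - res_d * theta)
    se = np.sqrt(np.mean(psi**2) / np.mean(res_d**2) ** 2 / len(y))
    return theta, se

# Simulated approximately sparse design (true slope is 0.5).
rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.standard_normal((n, p))
beta = 1.0 / (1 + np.arange(p)) ** 2   # approximately sparse coefficients
d = X @ beta + rng.standard_normal(n)
y = 0.5 * d + X @ beta + rng.standard_normal(n)
print(debiased_slope(y, d, X))
```

Cross-fitting keeps the nuisance estimation error out of the moment condition, which is what makes root-n consistency possible under the approximate sparsity conditions the paper studies.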
Related papers
- Debiased Nonparametric Regression for Statistical Inference and Distributionally Robustness [10.470114319701576]
We introduce a model-free debiasing method for smooth nonparametric estimators derived from any nonparametric regression approach.
We obtain a debiased estimator with proven pointwise normality and uniform convergence.
arXiv Detail & Related papers (2024-12-28T15:01:19Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
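As a point of reference for the entry above, here is a minimal sketch of the classical one-nearest-neighbor matching estimator of the average treatment effect; the paper's modified estimators add a correction that restores the parametric $\sqrt{n}$-rate, which this plain version generally lacks in higher dimensions.

```python
# Minimal sketch: classical 1-NN matching estimator of the ATE.
# No smoothing parameter is involved, matching the theme of the entry above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_match_ate(y, d, X):
    treated, control = X[d == 1], X[d == 0]
    # Impute each unit's missing potential outcome from its nearest
    # neighbor in the opposite treatment arm.
    to_c = NearestNeighbors(n_neighbors=1).fit(control)
    to_t = NearestNeighbors(n_neighbors=1).fit(treated)
    y0_hat = y[d == 0][to_c.kneighbors(X)[1].ravel()]
    y1_hat = y[d == 1][to_t.kneighbors(X)[1].ravel()]
    y1 = np.where(d == 1, y, y1_hat)   # observed or matched treated outcome
    y0 = np.where(d == 0, y, y0_hat)   # observed or matched control outcome
    return np.mean(y1 - y0)

rng = np.random.default_rng(1)
n = 1000
X = rng.standard_normal((n, 3))
d = (X[:, 0] + rng.standard_normal(n) > 0).astype(int)
y = d + X.sum(axis=1) + rng.standard_normal(n)
print(nn_match_ate(y, d, X))   # true ATE is 1
```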
- A Bound on the Maximal Marginal Degrees of Freedom [0.0]
This paper addresses low-rank approximations and surrogates for kernel ridge regression.
We demonstrate that the computational cost of the most popular low-rank approach, the Nyström method, is almost linear in the sample size.
The result builds on a thorough theoretical analysis of the approximation of elementary kernel functions by elements in the range of the associated integral operator.
arXiv Detail & Related papers (2024-02-20T10:25:44Z)
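A minimal sketch of the Nyström approach the entry refers to, using scikit-learn's feature-map implementation: the regression is solved in the span of m << n landmark points, so the cost grows roughly like O(n m^2) rather than the O(n^3) of an exact kernel solve. The landmark count below is an arbitrary illustrative choice.

```python
# Minimal sketch: Nyström low-rank approximation for kernel ridge regression.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(n)

# Map into an m-dimensional feature space built from m landmark points,
# then solve a linear ridge problem there (Nyström kernel ridge regression).
feat = Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0)
model = Ridge(alpha=1e-3).fit(feat.fit_transform(X), y)
print(model.score(feat.transform(X), y))
```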
- Batches Stabilize the Minimum Norm Risk in High Dimensional Overparameterized Linear Regression [12.443289202402761]
We show the benefits of batch partitioning through the lens of a minimum-norm overparameterized linear regression model.
We characterize the optimal batch size and show it is inversely proportional to the noise level.
We also show that shrinking the batch minimum-norm estimator by a factor equal to the Wiener coefficient further stabilizes it and results in lower quadratic risk in all settings.
arXiv Detail & Related papers (2023-06-14T11:02:08Z)
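A minimal sketch of the batch-partitioning idea under an assumed Gaussian design: compare the quadratic risk of the single minimum-norm interpolator with the average of per-batch minimum-norm solutions. The Wiener-style shrinkage step from the abstract is omitted for brevity.

```python
# Minimal sketch: batch-averaged minimum-norm estimators in an
# overparameterized (p > n) linear regression.
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 200, 400, 4   # n samples, p > n features, k disjoint batches
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

# Single-batch minimum-norm interpolator (Moore-Penrose pseudoinverse).
beta_full = np.linalg.pinv(X) @ y
# Average of minimum-norm solutions computed on k disjoint batches.
beta_batch = np.mean(
    [np.linalg.pinv(Xb) @ yb
     for Xb, yb in zip(np.array_split(X, k), np.array_split(y, k))], axis=0)

for name, b in [("full", beta_full), ("batched", beta_batch)]:
    print(name, np.mean((b - beta) ** 2))   # quadratic risk per coordinate
```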
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints, yielding the bias-constrained estimator (BCE).
A second motivation for the BCE arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
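A minimal sketch, under our own simplifying assumptions, of training with a bias penalty in the spirit of the bias-constrained estimator: the usual MSE loss plus a term that penalizes the squared average error. The paper constrains bias more carefully (per value of the unknown parameter); the penalty weight `lam` below is a hypothetical hyperparameter, not the paper's.

```python
# Minimal sketch: MSE loss augmented with a squared-mean-error penalty,
# a simplified stand-in for the paper's bias constraint.
import torch

def bias_penalized_loss(pred, target, lam=1.0):
    err = pred - target
    # First term: variance/accuracy; second term: pushes average error to 0.
    return (err ** 2).mean() + lam * err.mean() ** 2

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x = torch.randn(256, 3)
theta = x.sum(dim=1, keepdim=True)   # toy unknown to be estimated
for _ in range(200):
    opt.zero_grad()
    loss = bias_penalized_loss(net(x), theta)
    loss.backward()
    opt.step()
print(float((net(x) - theta).mean()))   # empirical bias after training
```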
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandits, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
arXiv Detail & Related papers (2020-10-09T22:54:38Z)
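For context on the entry above, here is a minimal sketch of the straight-through Gumbel-Softmax estimator whose variance the paper reduces by Rao-Blackwellization: the forward pass emits a hard one-hot sample, while gradients flow through the soft relaxation.

```python
# Minimal sketch: straight-through Gumbel-Softmax sampling.
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits, tau=1.0):
    # Sample Gumbel noise and form the soft (relaxed) categorical sample.
    gumbels = -torch.log(-torch.log(torch.rand_like(logits).clamp_min(1e-10)))
    soft = F.softmax((logits + gumbels) / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), logits.shape[-1]).to(soft)
    # Forward pass returns the hard one-hot sample; the backward pass
    # routes gradients through the soft relaxation (straight-through).
    return hard + soft - soft.detach()

torch.manual_seed(0)
logits = torch.zeros(5, requires_grad=True)
sample = straight_through_gumbel_softmax(logits)
loss = (sample * torch.arange(5.0)).sum()   # toy downstream objective
loss.backward()
print(sample, logits.grad)   # gradient is nonzero despite the hard sample
```

PyTorch also ships this as torch.nn.functional.gumbel_softmax(logits, tau, hard=True); the explicit version makes visible the sampling noise that Rao-Blackwellization averages over.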
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multi-task square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
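The pivotal property comes from the square-root Lasso objective, shown below in a standard form (not copied from the paper): because the fit term is the square root of the average squared residual, its subgradient scales with the noise, so the theoretically valid choice $\lambda \asymp \sqrt{\log(p)/n}$ does not depend on the unknown noise level.

$$
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1 .
$$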