Risk of the Least Squares Minimum Norm Estimator under the Spike
Covariance Model
- URL: http://arxiv.org/abs/1912.13421v2
- Date: Tue, 18 Feb 2020 08:26:02 GMT
- Title: Risk of the Least Squares Minimum Norm Estimator under the Spike
Covariance Model
- Authors: Yasaman Mahdaviyeh, Zacharie Naulet
- Abstract summary: We study the risk of the minimum norm linear least squares estimator when the number of parameters $d$ depends on $n$, and $\frac{d}{n} \rightarrow \infty$.
We show that in this setting the risk of the minimum norm least squares estimator vanishes compared to the risk of the null estimator.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the risk of the minimum norm linear least squares estimator when the
number of parameters $d$ depends on $n$, and $\frac{d}{n} \rightarrow \infty$.
We assume that data has an underlying low rank structure by restricting
ourselves to spike covariance matrices, where a fixed finite number of
eigenvalues grow with $n$ and are much larger than the rest of the eigenvalues,
which are (asymptotically) of the same order. We show that in this setting the risk
of the minimum norm least squares estimator vanishes compared to the risk of the null
estimator. We give asymptotic and non-asymptotic upper bounds for this risk,
and also leverage the spike model assumption to give an analysis of the bias
that leads to tighter bounds than in previous works.
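The setting of the abstract (minimum norm interpolation with $\frac{d}{n} \rightarrow \infty$ under a spike covariance) can be illustrated numerically. The sketch below is a hypothetical simulation, not taken from the paper: it draws data with one spiked eigenvalue, computes the minimum norm least squares solution via the pseudoinverse, and compares its risk to that of the null estimator $\hat{\beta} = 0$. All variable names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2000  # overparameterized regime: d/n is large

# Spike covariance: one eigenvalue growing with n, the rest of order one.
eigvals = np.ones(d)
eigvals[0] = 100.0 * n  # the spike

# Gaussian design with diagonal covariance diag(eigvals).
X = rng.standard_normal((n, d)) * np.sqrt(eigvals)

# True parameter aligned with the spike direction, plus observation noise.
beta = np.zeros(d)
beta[0] = 1.0
y = X @ beta + rng.standard_normal(n)

# Minimum norm least squares interpolator: beta_hat = X^+ y.
beta_hat = np.linalg.pinv(X) @ y

# Prediction risk E[(x^T (beta_hat - beta))^2] for a fresh x with the same
# diagonal covariance, computed in closed form.
diff = beta_hat - beta
risk_mn = np.sum(eigvals * diff**2)
risk_null = np.sum(eigvals * beta**2)  # null estimator beta = 0

print(risk_mn / risk_null)  # typically far below 1 in this regime
```

Because the signal lies along the spiked direction, the interpolator recovers that component accurately while the noise it absorbs is spread thinly across the many small-eigenvalue directions, so its risk is small relative to the null estimator, consistent with the vanishing-risk phenomenon the abstract describes.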
Related papers
- Minimax Linear Regression under the Quantile Risk [31.277788690403522]
We study the problem of designing minimax procedures in linear regression under the quantile risk.
We prove a matching upper bound on the worst-case quantile risk of a variant of the recently proposed min-max regression procedure.
arXiv Detail & Related papers (2024-06-17T23:24:14Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Optimal Excess Risk Bounds for Empirical Risk Minimization on $p$-Norm Linear Regression [19.31269916674961]
We show that, in the realizable case, under no moment assumptions, $O(d)$ samples are enough to exactly recover the target.
We extend this result to the case $p in (1, 2)$ under mild assumptions that guarantee the existence of the Hessian of the risk at its minimizer.
arXiv Detail & Related papers (2023-10-19T03:21:28Z) - Risk Estimation in a Markov Cost Process: Lower and Upper Bounds [3.1484174280822845]
We tackle the problem of estimating risk measures of the infinite-horizon discounted cost within a Markov cost process.
The risk measures we study include variance, Value-at-Risk (VaR), and Conditional Value-at-Risk (CVaR).
arXiv Detail & Related papers (2023-10-17T16:35:39Z) - Estimating the minimizer and the minimum value of a regression function
under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z) - Non asymptotic estimation lower bounds for LTI state space models with
Cram\'er-Rao and van Trees [1.14219428942199]
We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of an unknown covariance.
We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator.
Our results extend and improve existing lower bounds, yielding lower bounds in expectation on the mean square estimation risk.
arXiv Detail & Related papers (2021-09-17T15:00:25Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Minimax Off-Policy Evaluation for Multi-Armed Bandits [58.7013651350436]
We study the problem of off-policy evaluation in the multi-armed bandit model with bounded rewards.
We develop minimax rate-optimal procedures under three settings.
arXiv Detail & Related papers (2021-01-19T18:55:29Z) - Suboptimality of Constrained Least Squares and Improvements via
Non-Linear Predictors [3.5788754401889014]
We study the problem of predicting as well as the best linear predictor in a bounded Euclidean ball with respect to the squared loss.
We discuss additional distributional assumptions sufficient to guarantee an $O(d/n)$ excess risk rate for the least squares estimator.
arXiv Detail & Related papers (2020-09-19T21:39:46Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z) - Sharp Statistical Guarantees for Adversarially Robust Gaussian
Classification [54.22421582955454]
We provide the first optimal minimax guarantees for the excess risk of adversarially robust classification.
Results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting.
arXiv Detail & Related papers (2020-06-29T21:06:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.