On Suboptimality of Least Squares with Application to Estimation of
Convex Bodies
- URL: http://arxiv.org/abs/2006.04046v1
- Date: Sun, 7 Jun 2020 05:19:00 GMT
- Title: On Suboptimality of Least Squares with Application to Estimation of
Convex Bodies
- Authors: Gil Kur, Alexander Rakhlin and Adityanand Guntuboyina
- Abstract summary: We settle an open problem regarding optimality of Least Squares in estimating a convex set from noisy support function measurements in dimension $d\geq 6$.
We establish that Least Squares is sub-optimal, and achieves a rate of $\tilde{\Theta}_d(n^{-2/(d-1)})$ whereas the minimax rate is $\Theta_d(n^{-4/(d+3)})$.
- Score: 74.39616164169131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a technique for establishing lower bounds on the sample complexity
of Least Squares (or, Empirical Risk Minimization) for large classes of
functions. As an application, we settle an open problem regarding optimality of
Least Squares in estimating a convex set from noisy support function
measurements in dimension $d\geq 6$. Specifically, we establish that Least
Squares is minimax sub-optimal, and achieves a rate of
$\tilde{\Theta}_d(n^{-2/(d-1)})$ whereas the minimax rate is
$\Theta_d(n^{-4/(d+3)})$.
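The suboptimality claim can be checked at the level of exponents: Least Squares converges at rate $n^{-2/(d-1)}$ while the minimax rate is $n^{-4/(d+3)}$, so Least Squares is slower exactly when $2/(d-1) < 4/(d+3)$, which holds for $d \geq 6$ (the two exponents coincide at $d = 5$). A minimal sketch verifying this, with function names chosen here for illustration only:

```python
def ls_exponent(d):
    """Exponent of n in the Least Squares rate n^{-2/(d-1)}."""
    return 2 / (d - 1)

def minimax_exponent(d):
    """Exponent of n in the minimax rate n^{-4/(d+3)}."""
    return 4 / (d + 3)

# A smaller exponent means a slower rate, i.e. Least Squares is suboptimal.
for d in range(2, 10):
    suboptimal = ls_exponent(d) < minimax_exponent(d)
    print(f"d={d}: LS exponent {ls_exponent(d):.3f}, "
          f"minimax exponent {minimax_exponent(d):.3f}, "
          f"LS suboptimal: {suboptimal}")
```

Running the loop shows the crossover: for $d \leq 4$ Least Squares has the larger exponent, at $d = 5$ the two rates match, and for every $d \geq 6$ Least Squares falls strictly behind the minimax rate.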
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the iteration cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Estimating the minimizer and the minimum value of a regression function
under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Localization in 1D non-parametric latent space models from pairwise
affinities [6.982738885923206]
We consider the problem of estimating latent positions in a one-dimensional torus from pairwise affinities.
We introduce an estimation procedure that provably localizes all the latent positions with a maximum error of the order of $\sqrt{\log(n)/n}$, with high probability.
arXiv Detail & Related papers (2021-08-06T13:05:30Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z) - Minimal Constraints in the Parity Formulation of Optimization Problems [0.0]
We introduce a method to find the minimal strength of the constraints which are required to conserve the correct ground-state.
We find that depending on the problem class, the exponent ranges from linear $\alpha \propto 1$ to quadratic $\alpha \propto 2$ scaling with the number of logical qubits.
arXiv Detail & Related papers (2020-08-24T14:04:33Z) - Convex Regression in Multidimensions: Suboptimality of Least Squares Estimators [4.758195657080579]
Least Squares Estimators are shown to be suboptimal for estimating a $d$-dimensional convex function in squared error loss when the dimension $d$ is 5 or larger.
The specific function classes considered include: (i) bounded convex functions supported on a polytope (in random design), (ii) Lipschitz convex functions supported on any convex domain (in random design), and (iii) convex functions supported on a polytope (in fixed design).
arXiv Detail & Related papers (2020-06-03T04:57:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.