Efficient Minimax Optimal Estimators For Multivariate Convex Regression
- URL: http://arxiv.org/abs/2205.03368v1
- Date: Fri, 6 May 2022 17:04:05 GMT
- Title: Efficient Minimax Optimal Estimators For Multivariate Convex Regression
- Authors: Gil Kur and Eli Putterman
- Abstract summary: We present the first computationally efficient minimax optimal (up to logarithmic factors) estimators for the tasks of (i) $L$-Lipschitz convex regression and (ii) $\Gamma$-bounded convex regression under polytopal support.
This work is the first to show the existence of efficient minimax optimal estimators for non-Donsker classes whose corresponding Least Squares Estimators are provably minimax sub-optimal.
- Score: 1.583842747998493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the computational aspects of the task of multivariate convex
regression in dimension $d \geq 5$. We present the first computationally
efficient minimax optimal (up to logarithmic factors) estimators for the tasks
of (i) $L$-Lipschitz convex regression (ii) $\Gamma$-bounded convex regression
under polytopal support. The proof of the correctness of these estimators uses
a variety of tools from different disciplines, among them empirical process
theory, stochastic geometry, and potential theory. This work is the first to
show the existence of efficient minimax optimal estimators for non-Donsker
classes whose corresponding Least Squares Estimators are provably minimax
sub-optimal; a result of independent interest.
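The abstract does not spell out how the efficient estimators are constructed. As a generic, hedged illustration of fitting a convex (max-affine) model with a cheap alternating heuristic (not the paper's minimax-optimal procedure; all parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a convex ground truth f(x) = ||x||^2 in d = 5,
# the regime (d >= 5) the paper studies.
n, d = 200, 5
X = rng.uniform(-1, 1, size=(n, d))
y = np.sum(X**2, axis=1) + 0.05 * rng.standard_normal(n)

def fit_max_affine(X, y, k=8, n_iter=20, seed=0):
    """Heuristic max-affine least squares: alternately assign each point to
    the affine piece attaining the max, then refit each piece by OLS."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, k, size=n)
    A, b = np.zeros((k, d)), np.zeros(k)
    for _ in range(n_iter):
        for j in range(k):
            idx = labels == j
            if idx.sum() <= d:                 # skip degenerate pieces
                continue
            Z = np.hstack([X[idx], np.ones((idx.sum(), 1))])
            coef, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)
            A[j], b[j] = coef[:d], coef[d]
        labels = (X @ A.T + b).argmax(axis=1)  # reassign to maximizing piece
    return A, b

A, b = fit_max_affine(X, y)
pred = (X @ A.T + b).max(axis=1)   # max-affine predictor, convex by construction
mse = np.mean((pred - y) ** 2)
```

A maximum of affine functions is convex by construction, so the fitted predictor is always a valid convex surrogate, though this heuristic carries no minimax guarantee.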
Related papers
- Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
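As a hedged sketch of Pearson-divergence-based likelihood-ratio estimation, here is a batch uLSIF-style closed form (not the paper's online OLRE procedure; the kernel choice, bandwidth, and regularization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, size=(200, 1))   # samples x_t ~ p
xq = rng.normal(0.5, 1.0, size=(200, 1))   # samples x'_t ~ q

def gauss_kernel(x, centers, sigma=0.7):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Model the ratio r(x) = p(x)/q(x) as sum_l theta_l k(x, c_l) and minimize the
# empirical Pearson divergence; the minimizer has a ridge-style closed form.
centers = xp[:50]
Kp = gauss_kernel(xp, centers)             # kernel features under p
Kq = gauss_kernel(xq, centers)             # kernel features under q
H = Kq.T @ Kq / len(xq)                    # empirical E_q[k k^T]
h = Kp.mean(axis=0)                        # empirical E_p[k]
theta = np.linalg.solve(H + 0.1 * np.eye(len(centers)), h)
ratio = np.maximum(Kp @ theta, 0.0)        # estimated ratio at the p-samples
```

The paper's online method replaces this one-shot solve with streaming updates and regret guarantees; the sketch only shows why the Pearson-divergence objective is computationally convenient.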
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision
Processes [80.89852729380425]
We propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde{O}(d\sqrt{H^3 K})$.
Our work provides a complete answer to optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
arXiv Detail & Related papers (2022-12-12T18:58:59Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
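For intuition only, here is a uniform scalar quantizer for a learned weight vector together with its worst-case error bound (a simple baseline, not the rate-optimal scheme the paper derives):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)            # weights of a learned linear model

def uniform_quantize(w, bits):
    """Quantize each weight to one of 2**bits evenly spaced levels spanning
    [min(w), max(w)]; the per-weight error is at most half the step size."""
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((w - lo) / step) * step

err4 = np.max(np.abs(w - uniform_quantize(w, bits=4)))
err8 = np.max(np.abs(w - uniform_quantize(w, bits=8)))
```

Adding bits shrinks the worst-case rounding error geometrically; the paper instead characterizes the information-theoretically optimal trade-off between bits and minimax risk.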
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that utilizes fitting a difference of convex functions (DC functions) to the data.
We empirically validate the method, showing it to be practically implementable, and to have comparable performance to existing regression/classification methods on real-world datasets.
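The DC idea can be made concrete with two max-affine (hence convex) parts; a minimal sketch with hand-chosen coefficients (the paper fits such coefficients from data):

```python
import numpy as np

def max_affine(x, A, b):
    """Evaluate g(x) = max_i <a_i, x> + b_i, a convex piecewise-linear map."""
    return (x @ A.T + b).max(axis=1)

def dc_predict(x, A1, b1, A2, b2):
    """Difference-of-convex predictor f = g - h with g, h both max-affine."""
    return max_affine(x, A1, b1) - max_affine(x, A2, b2)

# The hat function 1 - |x| = 0 - max(-x - 1, x - 1): concave, hence written as
# a trivial convex part minus a max-affine convex part.
x = np.array([[-1.0], [0.0], [1.0]])
A1, b1 = np.zeros((1, 1)), np.zeros(1)                   # g == 0
A2, b2 = np.array([[-1.0], [1.0]]), np.array([-1.0, -1.0])
f = dc_predict(x, A1, b1, A2, b2)
```

Every continuous piecewise-linear function admits such a difference-of-convex representation, which is what makes the DC parametrization fully expressive.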
arXiv Detail & Related papers (2020-07-05T18:58:47Z) - Nearest Neighbour Based Estimates of Gradients: Sharp Nonasymptotic
Bounds and Applications [0.6445605125467573]
Gradient estimation is of crucial importance in statistics and learning theory.
We consider here the classic regression setup, where a real valued square integrable r.v. $Y$ is to be predicted.
We prove nonasymptotic bounds improving upon those obtained for alternative estimation methods.
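One common nearest-neighbour gradient estimator is a local linear least-squares fit; the sketch below is illustrative and not necessarily the exact estimator the paper analyses:

```python
import numpy as np

def knn_gradient(X, y, x0, k=10):
    """Estimate the gradient of the regression function at x0 by fitting a
    local affine model on the k nearest neighbours of x0."""
    dists = np.linalg.norm(X - x0, axis=1)
    idx = np.argsort(dists)[:k]
    Z = np.hstack([X[idx] - x0, np.ones((k, 1))])   # local affine design
    coef, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)
    return coef[:-1]                                 # slope part = gradient

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 2))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1]                    # noiseless linear target
grad = knn_gradient(X, y, x0=np.zeros(2), k=15)
```

On noiseless linear data the local fit recovers the gradient exactly; nonasymptotic bounds like the paper's quantify the error under noise and curvature.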
arXiv Detail & Related papers (2020-06-26T15:19:43Z) - Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
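As a toy illustration of computing a mixed Nash equilibrium of a zero-sum game, here is a standard multiplicative-weights scheme on matching pennies (a hedged sketch; the paper's estimation game and algorithm are more elaborate):

```python
import numpy as np

# Matching pennies: the row player maximizes p^T A q, the column player
# minimizes it; the unique mixed Nash equilibrium is (1/2, 1/2) for both.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def average_mwu_strategies(A, T=10000, eta=0.05):
    """Run simultaneous multiplicative-weights updates for both players and
    return the time-averaged strategies, which approximate a mixed Nash
    equilibrium with error governed by the players' regret."""
    m, n = A.shape
    p = np.arange(1, m + 1, dtype=float); p /= p.sum()   # non-equilibrium starts
    q = np.arange(n, 0, -1, dtype=float); q /= q.sum()
    p_sum, q_sum = np.zeros(m), np.zeros(n)
    for _ in range(T):
        p_sum += p
        q_sum += q
        gp, gq = A @ q, A.T @ p            # simultaneous payoff gradients
        p = p * np.exp(eta * gp);  p /= p.sum()    # row player ascends
        q = q * np.exp(-eta * gq); q /= q.sum()    # column player descends
    return p_sum / T, q_sum / T

p_star, q_star = average_mwu_strategies(A)
```

The time-averaged strategies inherit the no-regret guarantee of multiplicative weights, which is the same mechanism that makes online learning useful for constructing minimax estimators.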
arXiv Detail & Related papers (2020-06-19T22:49:42Z) - Low-Rank Matrix Estimation From Rank-One Projections by Unlifted Convex
Optimization [9.492903649862761]
We study an estimator with a convex formulation for the recovery of low-rank matrices from rank-one projections.
We show that under both models the estimator succeeds, with high probability, if the number of measurements exceeds $r^2(d_1+d_2)$, up to logarithmic factors.
arXiv Detail & Related papers (2020-04-06T14:57:54Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(d+1)n$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method.
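For intuition, the convexity constraints in such a QP come from subgradient inequalities; a minimal sketch that only assembles the $n(n-1)$-row constraint matrix (the sGS-ADMM and pALM solvers themselves are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.uniform(-1, 1, size=(n, d))

# Decision variables of the convex LSE: fitted values f_i and subgradients
# g_i in R^d, ordered as (f_1..f_n, g_1..g_n). Convexity is enforced by
#   f_j >= f_i + <g_i, x_j - x_i>  for all i != j,
# i.e. n(n-1) linear inequality constraints, assembled as rows of C with
# the convention C @ vars <= 0.
rows = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        row = np.zeros(n * (d + 1))
        row[i] = 1.0                                   # + f_i
        row[j] = -1.0                                  # - f_j
        row[n + i * d: n + (i + 1) * d] = X[j] - X[i]  # + <g_i, x_j - x_i>
        rows.append(row)
C = np.vstack(rows)
```

A quick feasibility check on the construction: a convex function with its true subgradients, e.g. $f(x) = \|x\|^2$ with $g = 2x$, satisfies every row of $C$ at or below zero.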
arXiv Detail & Related papers (2020-02-26T11:18:43Z) - Communication-Efficient Distributed Estimator for Generalized Linear
Models with a Diverging Number of Covariates [7.427903819459701]
A novel method is proposed to obtain an asymptotically efficient estimator for large-scale distributed data by two rounds of communication.
In this novel method, the assumption on the number of servers is more relaxed and thus practical for real-world applications.
arXiv Detail & Related papers (2020-01-17T08:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.