Estimating the minimizer and the minimum value of a regression function
under passive design
- URL: http://arxiv.org/abs/2211.16457v1
- Date: Tue, 29 Nov 2022 18:38:40 GMT
- Title: Estimating the minimizer and the minimum value of a regression function
under passive design
- Authors: Arya Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov
- Abstract summary: We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
- Score: 72.85024381807466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and
the minimum value $f^*$ of a smooth and strongly convex regression function $f$
from the observations contaminated by random noise. Our estimator
$\boldsymbol{z}_n$ of the minimizer $\boldsymbol{x}^*$ is based on a version of
the projected gradient descent with the gradient estimated by a regularized
local polynomial algorithm. Next, we propose a two-stage procedure for
estimation of the minimum value $f^*$ of the regression function $f$. At the first
stage, we construct an accurate enough estimator of $\boldsymbol{x}^*$, which
can be, for example, $\boldsymbol{z}_n$. At the second stage, we estimate the
function value at the point obtained in the first stage using a rate optimal
nonparametric procedure. We derive non-asymptotic upper bounds for the
quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk
of estimating $f^*$. We establish minimax lower bounds showing that, under
certain choice of parameters, the proposed algorithms achieve the minimax
optimal rates of convergence on the class of smooth and strongly convex
functions.
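A minimal Python sketch of the two ingredients described above, for intuition only: a projected gradient descent whose gradient is estimated by a ridge-regularized local linear fit, followed by a nonparametric estimate of the function value at the returned point. The Gaussian kernel, bandwidth `h`, step size `eta`, ball projection, and all function names are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def local_linear_gradient(X, y, z, h, reg=1e-6):
    """Estimate grad f(z) by a ridge-regularized local linear fit:
    regress y on [1, x - z] with kernel weights around z; the slope
    of the fit serves as the gradient estimate."""
    w = np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * h ** 2))  # kernel weights
    D = np.hstack([np.ones((X.shape[0], 1)), X - z])          # local design
    A = D.T @ (w[:, None] * D) + reg * np.eye(X.shape[1] + 1)
    coef = np.linalg.solve(A, D.T @ (w * y))
    return coef[1:]                               # coef[0] is the local value

def project_ball(z, center, radius):
    """Euclidean projection onto the ball over which f is minimized."""
    v = z - center
    n = np.linalg.norm(v)
    return z if n <= radius else center + radius * v / n

def estimate_minimizer(X, y, z0, steps=200, eta=0.1, h=0.3, radius=1.0):
    """Stage 1: projected gradient descent with locally estimated gradients."""
    z, center = np.asarray(z0, dtype=float), np.zeros(len(z0))
    for _ in range(steps):
        g = local_linear_gradient(X, y, z, h)
        z = project_ball(z - eta * g, center, radius)
    return z

def estimate_minimum_value(X, y, z_n, h):
    """Stage 2: nonparametric estimate of f at z_n (local linear intercept)."""
    w = np.exp(-np.sum((X - z_n) ** 2, axis=1) / (2 * h ** 2))
    D = np.hstack([np.ones((X.shape[0], 1)), X - z_n])
    A = D.T @ (w[:, None] * D) + 1e-6 * np.eye(X.shape[1] + 1)
    return np.linalg.solve(A, D.T @ (w * y))[0]

# Toy passive design: f(x) = ||x||^2 observed with noise at fixed points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sum(X ** 2, axis=1) + 0.1 * rng.standard_normal(2000)
z_n = estimate_minimizer(X, y, z0=np.array([0.8, -0.6]))
f_star_hat = estimate_minimum_value(X, y, z_n, h=0.2)
```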
Related papers
- Active Subsampling for Measurement-Constrained M-Estimation of Individualized Thresholds with High-Dimensional Data [3.1138411427556445]
In measurement-constrained problems, despite the availability of large datasets, we may only be able to afford observing the labels for a small portion of the data.
This poses a critical question: which data points are most beneficial to label given a budget constraint?
In this paper, we focus on the estimation of the optimal individualized threshold in a measurement-constrained M-estimation framework.
arXiv Detail & Related papers (2024-11-21T00:21:17Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Bayes beats Cross Validation: Efficient and Accurate Ridge Regression
via Expectation Maximization [3.061662434597098]
We present a method for tuning the regularization hyperparameter, $\lambda$, that is faster to compute than leave-one-out cross-validation (LOOCV).
We show that the proposed method is guaranteed to find a unique optimal solution for large enough $n$, under relatively mild conditions.
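The paper's EM updates are not reproduced in the summary; as background on tuning $\lambda$ by marginal-likelihood maximization rather than LOOCV, here is a sketch of the classical evidence-approximation fixed point for Bayesian ridge regression (MacKay-style updates), a related textbook scheme and not the authors' algorithm.

```python
import numpy as np

def ridge_lambda_evidence(X, y, iters=100, tol=1e-8):
    """Tune the ridge penalty lambda = alpha / beta by the classical
    evidence-approximation fixed point for Bayesian linear regression."""
    n, p = X.shape
    alpha, beta = 1.0, 1.0                       # prior and noise precisions
    eig = np.linalg.eigvalsh(X.T @ X)            # eigenvalues of X^T X
    for _ in range(iters):
        m = np.linalg.solve(beta * (X.T @ X) + alpha * np.eye(p),
                            beta * (X.T @ y))    # posterior mean of weights
        gamma = np.sum(beta * eig / (beta * eig + alpha))  # effective dof
        alpha_new = gamma / (m @ m)
        beta_new = (n - gamma) / np.sum((y - X @ m) ** 2)
        done = abs(alpha_new - alpha) < tol and abs(beta_new - beta) < tol
        alpha, beta = alpha_new, beta_new
        if done:
            break
    return alpha / beta                          # the ridge penalty lambda
```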
arXiv Detail & Related papers (2023-10-29T01:13:55Z) - Robust Nonparametric Regression under Poisoning Attack [13.470899588917716]
An adversarial attacker can modify the values of up to $q$ samples from a training dataset of size $N$.
Our initial solution is an M-estimator based on Huber loss minimization.
The final estimate is nearly minimax optimal for arbitrary $q$, up to a $\ln N$ factor.
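As a minimal sketch of the Huber-loss ingredient mentioned above, one can compute a pointwise M-estimate of the regression function by iteratively reweighted least squares over a local window; the window rule, bandwidth `h`, and threshold `delta` below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def huber_location(v, delta=1.0, iters=50):
    """Huber M-estimate of the center of samples v via iteratively
    reweighted least squares: minimizes sum_i huber_delta(v_i - mu)."""
    mu = np.median(v)                              # robust starting point
    for _ in range(iters):
        r = np.abs(v - mu)
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))  # Huber weights
        mu = np.sum(w * v) / np.sum(w)
    return mu

def robust_regression_estimate(X, y, x0, h, delta=1.0):
    """Estimate f(x0) by a Huber M-estimator over responses whose
    covariates fall within distance h of x0."""
    mask = np.linalg.norm(X - x0, axis=1) <= h
    return huber_location(y[mask], delta=delta)
```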
arXiv Detail & Related papers (2023-05-26T09:33:17Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - A first-order primal-dual method with adaptivity to local smoothness [64.62056765216386]
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
We propose an adaptive version of the Condat-Vu algorithm, which alternates between primal gradient steps and dual steps.
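For context, a fixed-step Condat-Vu iteration for this objective is sketched below; the paper's adaptive step-size rule is omitted, and the example instance ($f$ quadratic, $g$ the $\ell_1$ norm, so that the prox of $g^*$ is a clip to $[-1, 1]$) is an assumption made for illustration.

```python
import numpy as np

def condat_vu(grad_f, prox_gstar, A, x0, y0, tau, sigma, iters=500):
    """Fixed-step Condat-Vu iteration for min_x max_y f(x) + <Ax, y> - g*(y).
    Convergence requires roughly 1/tau - sigma * ||A||_2^2 >= L_f / 2."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A.T @ y)                   # primal step
        y = prox_gstar(y + sigma * (A @ (2 * x_new - x)), sigma)  # dual step
        x = x_new
    return x, y

# Example: f(x) = 0.5 * ||x - b||^2 (so L_f = 1) and g = ||.||_1, whose
# conjugate g* is the indicator of the unit sup-norm ball (prox = clip).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(10)
L_A = np.linalg.norm(A, 2)
tau, sigma = 1.0, 0.5 / L_A ** 2   # satisfies 1/tau - sigma*||A||^2 = L_f/2
x, y = condat_vu(lambda x: x - b, lambda y, s: np.clip(y, -1.0, 1.0),
                 A, np.zeros(10), np.zeros(20), tau, sigma)
```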
arXiv Detail & Related papers (2021-10-28T14:19:30Z) - Faster Perturbed Stochastic Gradient Methods for Finding Local Minima [92.99933928528797]
We propose Pullback, a faster perturbed stochastic gradient framework for finding local minima.
We show that Pullback with stochastic gradient estimators such as SARAH/SPIDER and STORM can find $(\epsilon, \epsilon_H)$-approximate local minima within $\tilde{O}(\epsilon^{-3} + \epsilon_H^{-6})$ stochastic gradient evaluations.
The core idea of our framework is a "step-size pullback" scheme to control the average movement of the gradient evaluations.
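The pullback scheme itself is not spelled out in the summary; as background, here is a generic, deterministic perturbed gradient descent of the kind such frameworks build on: take gradient steps and inject a small random perturbation whenever the gradient is small, so the iterate escapes strict saddle points. All thresholds and radii below are illustrative assumptions.

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.01, g_thresh=1e-3, radius=1e-2,
                 escape_every=50, iters=5000, rng=None):
    """Generic perturbed gradient descent for approximate local minima:
    gradient steps, plus an occasional random jump when the gradient is
    small enough to suggest a stationary point (possibly a saddle)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    last_jump = -escape_every
    for t in range(iters):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_jump >= escape_every:
            u = rng.standard_normal(x.shape)
            x = x + radius * u / np.linalg.norm(u)   # jump off the saddle
            last_jump = t
        else:
            x = x - eta * g
    return x

# Toy example: f(x) = x_0^2 - x_1^2 + x_1^4 has a strict saddle at 0 and
# local minima at (0, +/- 1/sqrt(2)); plain GD started at 0 stays stuck.
grad = lambda x: np.array([2 * x[0], -2 * x[1] + 4 * x[1] ** 3])
x_min = perturbed_gd(grad, np.zeros(2))
```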
arXiv Detail & Related papers (2021-10-25T07:20:05Z) - Stochastic Bias-Reduced Gradient Methods [44.35885731095432]
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly convex function, built on the Moreau-Yosida envelope.
arXiv Detail & Related papers (2021-06-17T13:33:05Z) - Saddle Point Optimization with Approximate Minimization Oracle [8.680676599607125]
A major approach to saddle point optimization $\min_x \max_y f(x, y)$ is the gradient-based approach, as popularized by generative adversarial networks (GANs).
In contrast, we analyze an alternative approach relying only on an oracle that solves a minimization problem approximately.
Our approach locates approximate solutions $x'$ and $y'$ to $\min_{x'} f(x', y)$ and $\max_{y'} f(x, y')$ at a given point $(x, y)$ and updates $(x, y)$ toward these approximate solutions $(x', y')$.
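This update rule is simple enough to sketch; below, scipy.optimize.minimize stands in for the approximate minimization oracle, and the learning rate eta and the toy objective are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def oracle_saddle(f, x0, y0, eta=0.5, iters=100):
    """Saddle-point search using only approximate minimization oracles:
    at (x, y), solve min_{x'} f(x', y) and max_{y'} f(x, y') (as a
    minimization of -f), then move (x, y) toward (x', y') with rate eta."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        x_p = minimize(lambda u: f(u, y), x).x     # approximate min over x
        y_p = minimize(lambda v: -f(x, v), y).x    # approximate max over y
        x = x + eta * (x_p - x)
        y = y + eta * (y_p - y)
    return x, y

# Toy convex-concave game with its saddle point at the origin.
f = lambda x, y: 0.5 * (x @ x) + x @ y - 0.5 * (y @ y)
x_star, y_star = oracle_saddle(f, np.ones(2), -np.ones(2))
```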
arXiv Detail & Related papers (2021-03-29T23:03:24Z) - On Suboptimality of Least Squares with Application to Estimation of
Convex Bodies [74.39616164169131]
We settle an open problem regarding optimality of Least Squares in estimating a convex set from noisy support function measurements in dimension $d \geq 6$.
We establish that Least Squares is sub-optimal, and achieves a rate of $\tilde{\Theta}_d(n^{-2/(d-1)})$ whereas the minimax rate is $\Theta_d(n^{-4/(d+3)})$.
arXiv Detail & Related papers (2020-06-07T05:19:00Z)