Optimal Sketching Bounds for Sparse Linear Regression
- URL: http://arxiv.org/abs/2304.02261v1
- Date: Wed, 5 Apr 2023 07:24:19 GMT
- Title: Optimal Sketching Bounds for Sparse Linear Regression
- Authors: Tung Mai, Alexander Munteanu, Cameron Musco, Anup B. Rao, Chris
Schwiegelshohn, David P. Woodruff
- Abstract summary: We study oblivious sketching for $k$-sparse linear regression under various loss functions such as an $\ell_p$ norm, or from a broad class of hinge-like loss functions.
We show that for sparse $\ell_2$ norm regression, there is an oblivious distribution over sketches with $\Theta(k\log(d/k)/\varepsilon^2)$ rows, which is tight up to a constant factor.
We also show that $O(\mu^2 k\log(\mu n d/\varepsilon)/\varepsilon^2)$ rows suffice for sparse regression under hinge-like loss functions, including sparse logistic and sparse ReLU regression.
- Score: 116.30196615349226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study oblivious sketching for $k$-sparse linear regression under various
loss functions such as an $\ell_p$ norm, or from a broad class of hinge-like
loss functions, which includes the logistic and ReLU losses. We show that for
sparse $\ell_2$ norm regression, there is a distribution over oblivious
sketches with $\Theta(k\log(d/k)/\varepsilon^2)$ rows, which is tight up to a
constant factor. This extends to $\ell_p$ loss with an additional additive
$O(k\log(k/\varepsilon)/\varepsilon^2)$ term in the upper bound. This
establishes a surprising separation from the related sparse recovery problem,
which is an important special case of sparse regression. For this problem,
under the $\ell_2$ norm, we observe an upper bound of $O(k \log (d)/\varepsilon
+ k\log(k/\varepsilon)/\varepsilon^2)$ rows, showing that sparse recovery is
strictly easier to sketch than sparse regression. For sparse regression under
hinge-like loss functions including sparse logistic and sparse ReLU regression,
we give the first known sketching bounds that achieve $o(d)$ rows showing that
$O(\mu^2 k\log(\mu n d/\varepsilon)/\varepsilon^2)$ rows suffice, where $\mu$
is a natural complexity parameter needed to obtain relative error bounds for
these loss functions. We again show that this dimension is tight, up to lower
order terms and the dependence on $\mu$. Finally, we show that similar
sketching bounds can be achieved for LASSO regression, a popular convex
relaxation of sparse regression, where one aims to minimize
$\|Ax-b\|_2^2+\lambda\|x\|_1$ over $x\in\mathbb{R}^d$. We show that sketching
dimension $O(\log(d)/(\lambda \varepsilon)^2)$ suffices and that the dependence
on $d$ and $\lambda$ is tight.
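
The sketch-and-solve recipe underlying these bounds can be illustrated with a small numerical example. The snippet below is a minimal sketch under stated assumptions, not the paper's construction: it uses a dense Gaussian sketch (the optimal oblivious distributions in the paper need not be Gaussian), a brute-force enumeration over supports for the $k$-sparse solve, and scikit-learn's Lasso for the sketched LASSO objective; the row counts are the stated asymptotic dimensions with constants set to 1, and all variable names are illustrative.

```python
# Minimal sketch-and-solve illustration (assumed setup: Gaussian sketch,
# brute-force k-sparse solver, scikit-learn Lasso -- not the paper's construction).
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k, eps = 2000, 12, 2, 0.5

# Synthetic k-sparse instance: b = A x* + noise.
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:k] = [3.0, -2.0]
b = A @ x_true + 0.1 * rng.standard_normal(n)

def sparse_least_squares(A, b, k):
    """Best k-sparse least-squares fit by enumerating supports (fine for tiny d, k)."""
    best_cost, best_x = np.inf, None
    for support in itertools.combinations(range(A.shape[1]), k):
        cols = list(support)
        xs, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        cost = np.linalg.norm(A[:, cols] @ xs - b) ** 2
        if cost < best_cost:
            best_cost, best_x = cost, np.zeros(A.shape[1])
            best_x[cols] = xs
    return best_x, best_cost

# Oblivious Gaussian sketch with m ~ k log(d/k) / eps^2 rows (constants are guesses).
m = int(np.ceil(k * np.log(d / k) / eps**2))
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, _ = sparse_least_squares(S @ A, S @ b, k)   # solve on the sketched instance
x_full, cost_full = sparse_least_squares(A, b, k)     # reference solution on full data
cost_sketch = np.linalg.norm(A @ x_sketch - b) ** 2   # evaluate sketched solution on full data
print(f"{m} sketch rows, relative excess cost {(cost_sketch - cost_full) / cost_full:.3f}")

# Sketched LASSO: minimize ||S(Ax - b)||_2^2 + lam*||x||_1 with ~log(d)/(lam*eps)^2 rows.
lam = 1.0
m_lasso = int(np.ceil(np.log(d) / (lam * eps) ** 2))
S2 = rng.standard_normal((m_lasso, n)) / np.sqrt(m_lasso)
# sklearn's Lasso minimizes (1/(2*n_samples))*||y - Xw||_2^2 + alpha*||w||_1,
# so alpha = lam / (2 * n_samples) matches the unnormalized objective above.
lasso = Lasso(alpha=lam / (2 * m_lasso), fit_intercept=False).fit(S2 @ A, S2 @ b)
print("sketched LASSO support:", np.nonzero(lasso.coef_)[0])
```

Note that the sketched solution is evaluated on the full data: the guarantee being illustrated is a $(1+\varepsilon)$-approximation of the original objective, computed from a sketch whose size is independent of $n$.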
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Almost Linear Constant-Factor Sketching for $\ell_1$ and Logistic
Regression [74.28017932704704]
We improve upon previous oblivious sketching and turnstile streaming results for $\ell_1$ and logistic regression.
We also give a tradeoff that yields a $1+\varepsilon$ approximation in input sparsity time.
Our sketch can be extended to approximate a regularized version of logistic regression where the data-dependent regularizer corresponds to the variance of the individual logistic losses.
arXiv Detail & Related papers (2023-03-31T18:12:33Z) - A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee [16.409210914237086]
Given a matrix $A\in \mathbb{R}^{n\times d}$ and a vector $b\in \mathbb{R}^n$, we consider the regression problem with $\ell_\infty$ guarantees.
We show that in order to obtain such an $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching matrices that are dense.
We also develop a novel analytical framework for $\ell_\infty$-guarantee regression that utilizes the Oblivious Coordinate-wise Embedding (OCE) property.
arXiv Detail & Related papers (2023-02-01T05:22:40Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1,p/2\}}\log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - L1 Regression with Lewis Weights Subsampling [10.796388585158184]
We show that sampling from $X$ according to its Lewis weights and outputting the empirical minimizer succeeds with probability $1-\delta$ for $m$.
We also give a corresponding lower bound of $\Omega(\frac{d}{\varepsilon^2} + (d + \frac{1}{\varepsilon^2})\log\frac{1}{\delta})$.
arXiv Detail & Related papers (2021-05-19T23:15:00Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)