Sharper Bounds for $\ell_p$ Sensitivity Sampling
- URL: http://arxiv.org/abs/2306.00732v2
- Date: Wed, 3 Jan 2024 15:47:40 GMT
- Title: Sharper Bounds for $\ell_p$ Sensitivity Sampling
- Authors: David P. Woodruff, Taisuke Yasuda
- Abstract summary: We show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p > 2$ that improve over the general $\mathfrak S d$ bound.
We also show that the root leverage score sampling algorithm achieves a bound of roughly $d$ for $1\leq p<2$, and that a combination of leverage score and sensitivity sampling achieves an improved bound of roughly $d^{2/p}\mathfrak S^{2-4/p}$ for $2<p<\infty$.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In large scale machine learning, random sampling is a popular way to
approximate datasets by a small representative subset of examples. In
particular, sensitivity sampling is an intensely studied technique which
provides provable guarantees on the quality of approximation, while reducing
the number of examples to the product of the VC dimension $d$ and the total
sensitivity $\mathfrak S$ in remarkably general settings. However, guarantees
going beyond this general bound of $\mathfrak S d$ are known in perhaps only
one setting, for $\ell_2$ subspace embeddings, despite intense study of
sensitivity sampling in prior work. In this work, we show the first bounds for
sensitivity sampling for $\ell_p$ subspace embeddings for $p > 2$ that improve
over the general $\mathfrak S d$ bound, achieving a bound of roughly $\mathfrak
S^{2-2/p}$ for $2<p<\infty$. Furthermore, our techniques yield further new
results in the study of sampling algorithms, showing that the root leverage
score sampling algorithm achieves a bound of roughly $d$ for $1\leq p<2$, and
that a combination of leverage score and sensitivity sampling achieves an
improved bound of roughly $d^{2/p}\mathfrak S^{2-4/p}$ for $2<p<\infty$. Our
sensitivity sampling results yield the best known sample complexity for a wide
class of structured matrices that have small $\ell_p$ sensitivity.
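To make the quantities above concrete: for $A \in \mathbb{R}^{n \times d}$ with rows $a_i$, the $\ell_p$ sensitivity of row $i$ is $s_i = \sup_{x \neq 0} |\langle a_i, x \rangle|^p / \|Ax\|_p^p$ and the total sensitivity is $\mathfrak S = \sum_{i=1}^n s_i$; sampling rows with probability proportional to (overestimates of) these scores and reweighting the survivors preserves $\|Ax\|_p^p$ in expectation. Below is a minimal NumPy sketch of the root leverage score sampling scheme discussed above for $1 \leq p < 2$, which uses $\tau_i^{p/2}$ (the square root of the $\ell_2$ leverage score $\tau_i$ when $p = 1$) as the sampling weight. The function names and the oversampling parameter `m` are illustrative choices, and all constants, logarithmic factors, and failure-probability bookkeeping from the paper's analysis are omitted.

```python
import numpy as np

def leverage_scores(A):
    """ell_2 leverage scores tau_i = ||Q[i]||_2^2 from a thin QR of A."""
    Q, _ = np.linalg.qr(A)  # reduced QR: Q has orthonormal columns
    return np.sum(Q ** 2, axis=1)

def root_leverage_score_sample(A, p, m, seed=None):
    """Illustrative sketch: keep row i with probability ~ m * tau_i^{p/2},
    then reweight kept rows by (1/prob)^{1/p} so that ||(SA)x||_p^p is an
    unbiased estimate of ||Ax||_p^p for every fixed x."""
    rng = np.random.default_rng(seed)
    tau = leverage_scores(A)
    q = tau ** (p / 2.0)                      # "root" leverage scores
    probs = np.minimum(1.0, m * q / q.sum())  # keep-probabilities in [0, 1]
    keep = rng.random(A.shape[0]) < probs
    weights = (1.0 / probs[keep]) ** (1.0 / p)
    return weights[:, None] * A[keep]
```

The reweighting is chosen so that a kept row contributes $|w_i \langle a_i, x \rangle|^p = |\langle a_i, x \rangle|^p / p_i$ with probability $p_i$, making $\|SAx\|_p^p$ an unbiased estimate of $\|Ax\|_p^p$ for each fixed $x$; the paper's contribution is bounding how many rows suffice for this guarantee to hold for all $x$ simultaneously.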
Related papers
- Optimal Bound for PCA with Outliers using Higher-Degree Voronoi Diagrams
We introduce new algorithms for Principal Component Analysis (PCA) with outliers.
We recover the optimal subspace for PCA even in the presence of outliers.
This approach achieves an optimal solution with a time complexity of $n^{d+\mathcal{O}(1)}\text{poly}(n,d)$.
arXiv Detail & Related papers (2024-08-13T13:05:36Z)
- Optimal bounds for $\ell_p$ sensitivity sampling via $\ell_2$ augmentation
We show that by augmenting the $\ell_p$ sensitivities with the $\ell_2$ sensitivities, we obtain better bounds on the optimal linear $\tilde O(\varepsilon^{-2}\mu^2 d)$ sampling complexity.
We also obtain an $\tilde O(\varepsilon^{-2}\mu^2 d)$ sensitivity sampling bound for logistic regression.
arXiv Detail & Related papers (2024-06-01T07:03:40Z)
- Computing Approximate $\ell_p$ Sensitivities
We provide efficient algorithms for approximating $\ell_p$ sensitivities and summary statistics of a given matrix.
For a wide class of matrices in real-world datasets, the total sensitivity can be quickly approximated and is significantly smaller than the theoretical prediction.
arXiv Detail & Related papers (2023-11-07T17:34:56Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- Approximate Function Evaluation via Multi-Armed Bandits
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z)
- Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation
We consider the question of learning the $Q$-function in a sample-efficient manner for reinforcement learning with continuous state and action spaces.
We develop a simple, iterative learning algorithm that finds an $\epsilon$-Schmidt $Q$-function with sample complexity of $\widetilde{O}\left(\frac{1}{\epsilon^{\max(d_1, d_2)+2}}\right)$ when the optimal $Q$-function has low rank $r$ and the discount factor $\gamma$ is below a certain threshold.
arXiv Detail & Related papers (2020-06-11T00:55:35Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)