Optimal Coreset for Gaussian Kernel Density Estimation
- URL: http://arxiv.org/abs/2007.08031v5
- Date: Mon, 21 Feb 2022 03:45:57 GMT
- Title: Optimal Coreset for Gaussian Kernel Density Estimation
- Authors: Wai Ming Tai
- Abstract summary: Given a point set $P\subset \mathbb{R}^d$, the kernel density estimate of $P$ is defined as \[ \overline{\mathcal{G}}_P(x) = \frac{1}{\left|P\right|}\sum_{p\in P}e^{-\left\lVert x-p \right\rVert^2} \] for any $x\in\mathbb{R}^d$.
We study how to construct a small subset $Q$ of $P$ such that the kernel density estimate of $P$ is approximated by the kernel density estimate of $Q$.
- Score: 0.8376091455761259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a point set $P\subset \mathbb{R}^d$, the kernel density estimate of $P$
is defined as \[ \overline{\mathcal{G}}_P(x) =
\frac{1}{\left|P\right|}\sum_{p\in P}e^{-\left\lVert x-p \right\rVert^2} \] for
any $x\in\mathbb{R}^d$. We study how to construct a small subset $Q$ of $P$
such that the kernel density estimate of $P$ is approximated by the kernel
density estimate of $Q$. This subset $Q$ is called a coreset. The main
technique in this work is constructing a $\pm 1$ coloring on the point set $P$
via discrepancy theory, leveraging Banaszczyk's Theorem. When $d>1$ is a
constant, our construction gives a coreset of size
$O\left(\frac{1}{\varepsilon}\right)$, as opposed to the best-known result of
$O\left(\frac{1}{\varepsilon}\sqrt{\log\frac{1}{\varepsilon}}\right)$. It is
the first result to break the barrier of the $\sqrt{\log}$ factor, even when
$d=2$.
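To make the definitions concrete, here is a minimal Python sketch (not from the paper) of the Gaussian KDE above and a halving-style coreset construction. The uniformly random $\pm 1$ coloring is only a stand-in for the low-discrepancy coloring the paper obtains via Banaszczyk's Theorem, so the sketch illustrates the pipeline, not the $O(1/\varepsilon)$ guarantee.

```python
import numpy as np

def gaussian_kde(P, x):
    """Kernel density estimate of P at a query x, with the unscaled
    Gaussian kernel e^{-||x - p||^2} from the abstract."""
    return np.mean(np.exp(-np.sum((P - x) ** 2, axis=1)))

def halve(P, rng):
    """One halving step: color P with +/-1 and keep one color class.
    NOTE: a uniformly random coloring is a stand-in for the paper's
    low-discrepancy coloring; it does not give the O(1/eps) bound."""
    colors = rng.choice([-1, 1], size=len(P))
    kept = P[colors == 1]
    return kept if len(kept) > 0 else P[: max(len(P) // 2, 1)]

def coreset(P, target_size, rng):
    """Repeatedly halve until at most target_size points remain."""
    Q = P
    while len(Q) > target_size:
        Q = halve(Q, rng)
    return Q

rng = np.random.default_rng(0)
P = rng.normal(size=(4096, 2))           # point set in R^2
Q = coreset(P, target_size=256, rng=rng)

# Compare the two estimates at a few random queries.
for x in rng.normal(size=(5, 2)):
    print(f"KDE_P(x) = {gaussian_kde(P, x):.4f}   KDE_Q(x) = {gaussian_kde(Q, x):.4f}")
```

In the paper, each halving step instead recolors the surviving points with a coloring of low kernel discrepancy, which is what drives the improved coreset size.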
Related papers
- Nearly Linear Sparsification of $\ell_p$ Subspace Approximation [47.790126028106734]
A popular approach to cope with the NP-hardness of the $\ell_p$ subspace approximation problem is to compute a strong coreset.
We obtain the first algorithm for constructing a strong coreset for $\ell_p$ subspace approximation with a nearly optimal dependence on the rank parameter $k$.
Our techniques also lead to the first nearly optimal online strong coresets for $\ell_p$ subspace approximation, with bounds similar to the offline setting.
arXiv Detail & Related papers (2024-07-03T16:49:28Z)
- A note on estimating the dimension from a random geometric graph [2.3020018305241337]
We study the problem of estimating the dimension $d$ of the underlying space when we have access to the adjacency matrix of the graph.
We also show that, without any condition on the density, a consistent estimator of $d$ exists when $n r_n^d \to \infty$ and $r_n = o(1)$.
arXiv Detail & Related papers (2023-11-21T23:46:44Z)
- $\ell_p$-Regression in the Arbitrary Partition Model of Communication [59.89387020011663]
We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model.
For $p = 2$, i.e., least squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits.
For $p \in (1,2)$, we obtain an $\tilde{O}(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound.
arXiv Detail & Related papers (2023-07-11T08:51:53Z)
- For Kernel Range Spaces a Constant Number of Queries Are Sufficient [13.200502573462712]
A kernel range space concerns a set of points $X \subset \mathbb{R}^d$ and the space of all queries by a fixed kernel.
An $\varepsilon$-cover is a subset of points $Q \subset \mathbb{R}^d$ such that, for any $p \in \mathbb{R}^d$, $\frac{1}{n}\left\lVert R_p - R_q \right\rVert \leq \varepsilon$ for some $q \in Q$ (a small sketch of this definition follows this entry).
arXiv Detail & Related papers (2023-06-28T19:19:33Z)
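As a companion to the $\varepsilon$-cover definition in the entry above, here is a minimal sketch of the cover condition. The Gaussian kernel, the $L_1$ norm, and taking $R_p$ to be the vector of kernel values $(K(p, x_1), \dots, K(p, x_n))$ are all assumptions for illustration; the summary does not pin these down.

```python
import numpy as np

def kernel_vector(X, p):
    """R_p: the Gaussian kernel values (K(p, x_1), ..., K(p, x_n))."""
    return np.exp(-np.sum((X - p) ** 2, axis=1))

def is_covered(X, Q, p, eps):
    """True if some q in Q satisfies (1/n) * ||R_p - R_q||_1 <= eps.
    The L1 norm and Gaussian kernel are assumptions, not from the summary."""
    n = len(X)
    R_p = kernel_vector(X, p)
    return any(np.abs(R_p - kernel_vector(X, q)).sum() / n <= eps for q in Q)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                       # the point set
Q = X[rng.choice(len(X), size=20, replace=False)]   # a candidate cover
p = rng.normal(size=2)                              # an arbitrary query
print(is_covered(X, Q, p, eps=0.1))
```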
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- On the Self-Penalization Phenomenon in Feature Selection [69.16452769334367]
We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels.
As an application, we use this sparsity-inducing mechanism to build algorithms that are consistent for feature selection.
arXiv Detail & Related papers (2021-10-12T09:36:41Z)
- Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\text{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\left\lVert A\right\rVert_* / \sqrt{n})$.
Here, we show that $\left\lVert \mathbb{E}[R(z)] - \tilde{R}(z)\right\rVert_F$
arXiv Detail & Related papers (2021-09-06T14:21:43Z)
- Kernel Thinning [26.25415159542831]
Kernel thinning is a new procedure for compressing a distribution $\mathbb{P}$ more effectively than i.i.d. sampling or standard thinning.
We derive explicit non-asymptotic maximum mean discrepancy bounds for Gaussian, Matérn, and B-spline kernels (the sketch after this entry illustrates the discrepancy being bounded).
arXiv Detail & Related papers (2021-05-12T17:56:42Z)
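The maximum mean discrepancy bounded by kernel thinning is itself easy to compute empirically. The sketch below is not the paper's algorithm; it only evaluates the Gaussian-kernel MMD between a sample and a naively i.i.d.-thinned subset, the baseline that kernel thinning is designed to beat.

```python
import numpy as np

def mmd_gaussian(X, Y, bandwidth=1.0):
    """Empirical maximum mean discrepancy between samples X and Y under
    the Gaussian kernel k(a, b) = exp(-||a - b||^2 / bandwidth^2)."""
    def mean_kernel(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return np.mean(np.exp(-d2 / bandwidth**2))
    val = mean_kernel(X, X) + mean_kernel(Y, Y) - 2 * mean_kernel(X, Y)
    return np.sqrt(max(val, 0.0))

rng = np.random.default_rng(2)
X = rng.normal(size=(1024, 2))                      # full sample
Y = X[rng.choice(len(X), size=32, replace=False)]   # naive i.i.d. thinning
print(f"MMD(full, thinned) = {mmd_gaussian(X, Y):.4f}")
```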
- Optimal Mean Estimation without a Variance [103.26777953032537]
We study the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist.
We design an estimator which attains the smallest possible confidence interval as a function of $n$, $d$, and $\delta$.
arXiv Detail & Related papers (2020-11-24T22:39:21Z)
- Convergence of Graph Laplacian with kNN Self-tuned Kernels [14.645468999921961]
A self-tuned kernel adaptively sets a bandwidth $\sigma_i$ at each point $x_i$ by the $k$-nearest neighbor (kNN) distance.
This paper proves the convergence of the graph Laplacian operator $L_N$ to the manifold (weighted-)Laplacian for a new family of kNN self-tuned kernels (a sketch of such a kernel follows this entry).
arXiv Detail & Related papers (2020-11-03T04:55:33Z)
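As referenced in the entry above, here is a minimal sketch of a kNN self-tuned kernel and the resulting graph Laplacian. The unnormalized Laplacian $L = D - W$ is an assumption made for brevity; the paper's operator $L_N$ and its normalization may differ.

```python
import numpy as np

def self_tuned_affinity(X, k=7):
    """Self-tuned Gaussian affinity: sigma_i is the distance from x_i to its
    k-th nearest neighbor, and W_ij = exp(-||x_i - x_j||^2 / (sigma_i * sigma_j))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    # Column 0 of the sorted distances is the point itself, so column k
    # is the k-th nearest neighbor.
    sigma = np.sqrt(np.sort(d2, axis=1)[:, k])
    return np.exp(-d2 / np.outer(sigma, sigma))

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W (shown for simplicity)."""
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
L = graph_laplacian(self_tuned_affinity(X))
print(L.shape)  # (200, 200)
```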
- Sets Clustering [25.358415142404752]
We prove that a core-set of $O(\log n)$ sets always exists, and can be computed in $O(n\log n)$ time.
Applying an inefficient but optimal algorithm on this core-set allows us to obtain the first PTAS ($1+\varepsilon$ approximation) for the sets-$k$-means problem.
Open source code and experimental results for document classification and facility locations are also provided.
arXiv Detail & Related papers (2020-03-09T13:30:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.