Reverse Euclidean and Gaussian isoperimetric inequalities for parallel
sets with applications
- URL: http://arxiv.org/abs/2006.09568v2
- Date: Fri, 21 Aug 2020 02:47:10 GMT
- Title: Reverse Euclidean and Gaussian isoperimetric inequalities for parallel
sets with applications
- Authors: Varun Jog
- Abstract summary: We show that the surface area of an $r$-parallel set in $\mathbb{R}^d$ with volume at most $V$ is upper-bounded by $e^{\Theta(d)}V/r$, whereas its Gaussian surface area is upper-bounded by $\max(e^{\Theta(d)}, e^{\Theta(d)}/r)$.
We also derive a reverse form of the Brunn-Minkowski inequality for $r$-parallel sets and, as an aside, a reverse entropy power inequality for Gaussian-smoothed random variables.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The $r$-parallel set of a measurable set $A \subseteq \mathbb R^d$ is the set
of all points whose distance from $A$ is at most $r$. In this paper, we show
that the surface area of an $r$-parallel set in $\mathbb R^d$ with volume at
most $V$ is upper-bounded by $e^{\Theta(d)}V/r$, whereas its Gaussian surface
area is upper-bounded by $\max(e^{\Theta(d)}, e^{\Theta(d)}/r)$. We also derive
a reverse form of the Brunn-Minkowski inequality for $r$-parallel sets, and as
an aside a reverse entropy power inequality for Gaussian-smoothed random
variables. We apply our results to two problems in theoretical machine
learning: (1) bounding the computational complexity of learning $r$-parallel
sets under a Gaussian distribution; and (2) bounding the sample complexity of
estimating robust risk, which is a notion of risk in the adversarial machine
learning literature that is analogous to the Bayes risk in hypothesis testing.
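For intuition, the membership condition defining the $r$-parallel set (distance from $A$ at most $r$) can be sketched for a finite point set standing in for $A$. This is a simplification for illustration only: the paper's $A$ is an arbitrary measurable subset of $\mathbb R^d$, and the function and sample set below are hypothetical.

```python
import math

def in_parallel_set(points, x, r):
    """True iff x lies in the r-parallel set of the finite set `points`,
    i.e., iff the minimum distance from x to `points` is at most r.
    (Illustrative only: the paper allows any measurable set A.)"""
    return min(math.dist(p, x) for p in points) <= r

# The 1-parallel set of the single point {(0, 0)} is the closed unit disk.
A = [(0.0, 0.0)]
print(in_parallel_set(A, (0.5, 0.5), 1.0))  # distance ~0.707 <= 1: True
print(in_parallel_set(A, (1.0, 1.0), 1.0))  # distance ~1.414 > 1: False
```

For a finite $A$, the $r$-parallel set is just a union of closed balls of radius $r$; the paper's isoperimetric bounds concern the boundary measure of such unions in high dimension.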
Related papers
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Toward $L_\infty$-recovery of Nonlinear Functions: A Polynomial Sample
Complexity Bound for Gaussian Random Fields [35.76184529520015]
Many machine learning applications require learning a function with a small worst-case error over the entire input domain, that is, the $L_\infty$-error.
Most existing theoretical works only guarantee recovery in average errors such as the $L_2$-error.
This paper makes some initial steps beyond the impossibility results by leveraging the randomness in the ground-truth functions.
arXiv Detail & Related papers (2023-04-29T18:33:39Z) - Overparametrized linear dimensionality reductions: From projection
pursuit to two-layer neural networks [10.368585938419619]
Given a cloud of $n$ data points in $\mathbb{R}^d$, consider all projections onto $m$-dimensional subspaces of $\mathbb{R}^d$.
What does this collection of probability distributions look like when $n,d$ grow large?
Denoting by $\mathscr{F}_{m,\alpha}$ the set of probability distributions in $\mathbb{R}^m$ that arise as low-dimensional projections in this limit, we establish new inner and outer bounds on $\mathscr{F}_
arXiv Detail & Related papers (2022-06-14T00:07:33Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random-features technique, called Ternary Random Feature (TRF).
Computing the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
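The cost claim above can be illustrated with a generic ternary random projection: if the projection matrix has entries in $\{-1, 0, +1\}$, the matrix-vector product needs only additions and subtractions, and each entry fits in 2 bits. This is a hedged sketch of the general idea, not the paper's exact TRF construction; the dimensions, sparsity, and cosine nonlinearity below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 16  # input dimension and number of random features (hypothetical)

# Ternary projection matrix: entries in {-1, 0, +1}, so x @ W involves
# only sign flips and additions -- no floating-point multiplications --
# and each entry needs 2 bits instead of a full float.
W = rng.choice([-1, 0, 1], size=(d, D))

x = rng.standard_normal(d)
features = np.cos(x @ W)  # generic random-features nonlinearity (assumed)
```

The same shape of argument applies whatever nonlinearity the method uses: the savings come entirely from the ternary projection, not from the activation.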
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Optimal Spectral Recovery of a Planted Vector in a Subspace [80.02218763267992]
We study efficient estimation and detection of a planted vector $v$ whose $\ell_4$ norm differs from that of a Gaussian vector with the same $\ell_2$ norm.
We show that in the regime $n\rho \gg \sqrt{N}$, any spectral method from a large class (and more generally, any low-degree polynomial of the input) fails to detect the planted vector.
arXiv Detail & Related papers (2021-05-31T16:10:49Z) - SDP Achieves Exact Minimax Optimality in Phase Synchronization [19.909352968029584]
We study the phase synchronization problem with noisy measurements $Y=z^*z^{*\mathsf{H}}+\sigma W\in\mathbb{C}^{n\times n}$.
We prove that SDP achieves error bound $(1+o(1))\frac{\sigma^2}{2np}$ under a squared $\ell_2$ loss.
arXiv Detail & Related papers (2021-01-07T03:14:05Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Multivariate mean estimation with direction-dependent accuracy [8.147652597876862]
We consider the problem of estimating the mean of a random vector based on $N$ independent, identically distributed observations.
We propose an estimator that has a near-optimal error in all directions in which the variance of the one-dimensional marginal of the random vector is not too small.
arXiv Detail & Related papers (2020-10-22T17:52:45Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.