Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness
- URL: http://arxiv.org/abs/2002.03239v2
- Date: Fri, 14 Aug 2020 05:02:35 GMT
- Title: Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness
- Authors: Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
- Abstract summary: We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
- Score: 151.67113334248464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing, using just a simple isotropic Gaussian distribution,
has been shown to produce good robustness guarantees against $\ell_2$-norm
bounded adversaries. In this work, we show that extending the smoothing
technique to defend against other attack models can be challenging, especially
in the high-dimensional regime. In particular, for a vast class of
i.i.d.~smoothing distributions, we prove that the largest $\ell_p$-radius that
can be certified decreases as $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ with
dimension $d$ for $p > 2$. Notably, for $p \geq 2$, this dependence on $d$ is
no better than that of the $\ell_p$-radius that can be certified using
isotropic Gaussian smoothing, essentially putting a matching lower bound on the
robustness radius. When restricted to {\it generalized} Gaussian smoothing,
these two bounds can be shown to be within a constant factor of each other in
an asymptotic sense, establishing that Gaussian smoothing provides the best
possible results, up to a constant factor, when $p \geq 2$. We present
experimental results on CIFAR to validate our theory. For other smoothing
distributions, such as a uniform distribution within an $\ell_1$- or an
$\ell_\infty$-norm ball, we show upper bounds of the form $O(1 / d)$ and $O(1 /
d^{1 - \frac{1}{p}})$ respectively, which have an even worse dependence on $d$.
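The scaling in the abstract can be illustrated with a short sketch. The snippet below computes the standard Gaussian-smoothing $\ell_2$ certificate (the $\sigma\,\Phi^{-1}(p_A)$ form from Cohen et al., 2019) and then converts it to an $\ell_p$ certificate for $p > 2$ via the norm inequality $\|x\|_2 \leq d^{1/2 - 1/p}\|x\|_p$, which exhibits the $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ decay discussed above. The function names and parameter choices are illustrative, not from the paper.

```python
from statistics import NormalDist


def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius under isotropic Gaussian smoothing
    (Cohen et al., 2019): R = sigma * Phi^{-1}(p_a), where p_a is a
    lower bound on the smoothed classifier's top-class probability."""
    return sigma * NormalDist().inv_cdf(p_a)


def lp_radius_from_l2(r_l2: float, d: int, p: float) -> float:
    """Convert an l2 certificate into an lp certificate for p > 2 using
    ||x||_2 <= d^(1/2 - 1/p) * ||x||_p; the certified lp radius thus
    shrinks as O(1 / d^(1/2 - 1/p)) with the input dimension d."""
    return r_l2 / d ** (0.5 - 1.0 / p)


# Example: CIFAR-scale input (d = 3*32*32 = 3072), sigma = 0.5, p_a = 0.9.
r2 = certified_l2_radius(0.9, 0.5)              # l2 radius, dimension-free
rinf = lp_radius_from_l2(r2, 3072, float("inf"))  # l_inf radius, ~r2/sqrt(d)
```

Note that the $\ell_2$ radius itself carries no explicit dimension dependence, while the converted $\ell_\infty$ radius loses a factor of $\sqrt{d}$, consistent with the hardness results cited in the related papers below.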
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Stochastic Approximation Approaches to Group Distributionally Robust
Optimization [96.26317627118912]
Group distributionally robust optimization (GDRO)
Online learning techniques reduce the number of samples required in each round from $m$ to $1$ while keeping the same sample complexity.
A novel formulation of weighted GDRO, which allows us to derive distribution-dependent convergence rates.
arXiv Detail & Related papers (2023-02-18T09:24:15Z) - Sharper Utility Bounds for Differentially Private Models [20.246768861331276]
First $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high-probability excess population risk bound for differentially private algorithms.
New proposed algorithm m-NGP improves the performance of the differentially private model over real datasets.
arXiv Detail & Related papers (2022-04-22T07:03:13Z) - From Smooth Wasserstein Distance to Dual Sobolev Norm: Empirical
Approximation and Statistical Applications [18.618590805279187]
We show that $\mathsf{W}_p^{(\sigma)}$ is controlled by a $p$th-order smooth dual Sobolev norm $\mathsf{d}_p^{(\sigma)}$.
We derive the limit distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n, \mu)$ in all dimensions $d$, when $\mu$ is sub-Gaussian.
arXiv Detail & Related papers (2021-01-11T17:23:24Z) - The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma$
arXiv Detail & Related papers (2020-12-31T18:24:41Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Learning Mixtures of Spherical Gaussians via Fourier Analysis [0.5381004207943596]
We find that a bound on the sample and computational complexity was previously unknown when $\omega(1) \leq d \leq O(\log k)$.
These authors also bound the sample complexity of learning a random mixture of Gaussians in a ball of radius $\Theta(\sqrt{d})$ in $d$ dimensions, when the sample size is at least $\mathrm{poly}(k, \frac{1}{\delta})$.
arXiv Detail & Related papers (2020-04-13T08:06:29Z) - Randomized Smoothing of All Shapes and Sizes [29.40896576138737]
We show that, for an appropriate notion of "optimal", the optimal smoothing distribution for any "nice" norm has level sets given by the norm's *Wulff Crystal*.
We show fundamental limits to current randomized smoothing techniques via the theory of *Banach space cotypes*.
arXiv Detail & Related papers (2020-02-19T11:41:09Z) - Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for
High-Dimensional Images [23.264535488112134]
We show a hardness result for random smoothing to achieve adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p > 2$.
arXiv Detail & Related papers (2020-02-10T03:26:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.