Randomized Smoothing of All Shapes and Sizes
- URL: http://arxiv.org/abs/2002.08118v5
- Date: Thu, 23 Jul 2020 21:20:51 GMT
- Title: Randomized Smoothing of All Shapes and Sizes
- Authors: Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn,
Jerry Li
- Abstract summary: We show that for an appropriate notion of "optimal", the optimal smoothing distributions for any "nice" norm have level sets given by the norm's *Wulff Crystal*.
We show fundamental limits to current randomized smoothing techniques via the theory of *Banach space cotypes*.
- Score: 29.40896576138737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is the current state-of-the-art defense with provable
robustness against $\ell_2$ adversarial attacks. Many works have devised new
randomized smoothing schemes for other metrics, such as $\ell_1$ or
$\ell_\infty$; however, substantial effort was needed to derive such new
guarantees. This raises the question: can we find a general theory for randomized
smoothing?
We propose a novel framework for devising and analyzing randomized smoothing
schemes, and validate its effectiveness in practice. Our theoretical
contributions are: (1) we show that for an appropriate notion of "optimal", the
optimal smoothing distributions for any "nice" norms have level sets given by
the norm's *Wulff Crystal*; (2) we propose two novel and complementary methods
for deriving provably robust radii for any smoothing distribution; and, (3) we
show fundamental limits to current randomized smoothing techniques via the
theory of *Banach space cotypes*. By combining (1) and (2), we significantly
improve the state-of-the-art certified accuracy in $\ell_1$ on standard
datasets. Meanwhile, we show using (3) that with only label statistics under
random input perturbations, randomized smoothing cannot achieve nontrivial
certified accuracy against perturbations of $\ell_p$-norm $\Omega(\min(1,
d^{\frac{1}{p} - \frac{1}{2}}))$, when the input dimension $d$ is large. We
provide code at github.com/tonyduan/rs4a.
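As a concrete instance of (3): taking $p = \infty$ gives $\Omega(\min(1, d^{-1/2})) = \Omega(d^{-1/2})$, so label-statistics-only smoothing cannot certify $\ell_\infty$ radii beyond roughly $d^{-1/2}$ when the input dimension $d$ is large. For orientation, below is a minimal sketch of the standard Gaussian $\ell_2$ certification loop that the paper's framework generalizes; it is not the authors' implementation (see the linked repo for that), and `predict` is a hypothetical interface mapping a batch of inputs to hard class labels.

```python
import numpy as np
from scipy.stats import beta, norm

def certify_gaussian_l2(predict, x, sigma=0.25, n=10_000, alpha=0.001,
                        num_classes=10):
    """Monte Carlo l2 certification for Gaussian randomized smoothing.

    A minimal sketch of the standard baseline, not the paper's generalized
    schemes. A faithful implementation would use separate sample sets to
    select the top class and to estimate its probability.
    """
    # Classify n noisy copies of x and tally the predicted labels.
    noisy = x[None] + sigma * np.random.randn(n, *x.shape)
    counts = np.bincount(predict(noisy), minlength=num_classes)
    top = int(counts.argmax())
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_lo = beta.ppf(alpha, counts[top], n - counts[top] + 1)
    if p_lo <= 0.5:
        return None  # abstain: no nontrivial certificate at this confidence
    return top, sigma * norm.ppf(p_lo)  # (label, certified l2 radius)
```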
Related papers
- SPLITZ: Certifiable Robustness via Split Lipschitz Randomized Smoothing [8.471466670802817]
There are two approaches to providing certifiable robustness against adversarial examples.
We propose *SPLITZ*, a practical and novel approach.
We show that *SPLITZ* consistently improves upon existing state-of-the-art approaches.
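The two certificates compose by a standard Lipschitz argument; the sketch below illustrates that generic composition under the assumption that the network is split into an $L$-Lipschitz first half and a smoothed second half. It is a reading of the idea, not necessarily SPLITZ's exact certificate.

```python
def split_certificate(lipschitz_first_half: float,
                      smoothed_radius_second_half: float) -> float:
    """Generic composition of a Lipschitz bound with a smoothing certificate.

    If the first half of the network is L-Lipschitz and the smoothed second
    half certifies radius r around the intermediate representation, then no
    input perturbation of norm below r / L can change the final prediction.
    A hypothetical sketch of the composition, not SPLITZ's exact rule.
    """
    return smoothed_radius_second_half / lipschitz_first_half
```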
arXiv Detail & Related papers (2024-07-03T05:13:28Z)
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale, and we demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
- Near-Optimal Non-Convex Stochastic Optimization under Generalized Smoothness [21.865728815935665]
Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $O(\epsilon)$-stationary point.
However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications.
In this work, we solve the prior two problems simultaneously by revisiting a simple variant of the STORM algorithm.
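For context, STORM maintains a recursive variance-reduced momentum estimate of the gradient; a minimal sketch follows, where `grad(x, seed)` is a hypothetical stochastic-gradient oracle that returns the gradient at `x` on the sample indexed by `seed` (the adaptive momentum schedule of the actual algorithm is omitted).

```python
import numpy as np

def storm(grad, x0, lr=0.01, a=0.1, steps=1000):
    """Minimal sketch of the STORM recursive-momentum update.

    Not the paper's variant; only the core estimator is shown.
    """
    x = np.asarray(x0, dtype=float)
    d = grad(x, seed=0)  # initialize the estimator with one sample
    for t in range(1, steps):
        x_new = x - lr * d
        # Recursive momentum: correct the old estimate using the *same*
        # fresh sample evaluated at both the new and old iterates.
        g_new = grad(x_new, seed=t)
        g_old = grad(x, seed=t)
        d = g_new + (1.0 - a) * (d - g_old)
        x = x_new
    return x
```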
arXiv Detail & Related papers (2023-02-13T00:22:28Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed-confidence setting under a generative model.
The lower bound, given as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
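For contrast with the minimax-optimal schemes the paper derives, a generic $b$-bit uniform quantizer of the learned weights looks as follows; it is a baseline sketch only, not the paper's algorithm.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Generic b-bit uniform quantizer over the weight range.

    A baseline sketch for context only; the paper's minimax-optimal
    scheme is not implemented here.
    """
    lo, hi = float(w.min()), float(w.max())
    if hi == lo:
        return w.copy()               # constant weights: nothing to quantize
    levels = 2 ** bits - 1
    codes = np.round((w - lo) / (hi - lo) * levels)  # integer codewords
    return lo + codes * (hi - lo) / levels           # dequantized weights
```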
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Smoothed Analysis with Adaptive Adversaries [28.940767013349408]
We prove novel algorithmic guarantees for several online problems in the smoothed analysis model.
In this model, at each time step an adversary chooses an input distribution with density function bounded above by $\tfrac{1}{\sigma}$ times that of the uniform distribution.
Our results hold for *adaptive* adversaries that can choose an input distribution based on the decisions of the algorithm and the realizations of the inputs in the previous time steps.
arXiv Detail & Related papers (2021-02-16T20:54:49Z)
- Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using $0^{th}$-order and $1^{st}$-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z)
- Provable Robust Classification via Learned Smoothed Densities [1.599072005190786]
We formulate the problem of robust classification in terms of $\widehat{x}(Y)$, the *Bayes estimator* of $X$ given the noisy measurements $Y$.
We show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses.
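For Gaussian measurements $Y = X + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the Bayes estimator has a classical closed form in terms of the smoothed density $p$ of $Y$ (the Tweedie/Miyasawa identity), which is presumably how a learned energy function can supply it; the identity itself is standard, though its exact role is not spelled out in the summary above:

$$\widehat{x}(y) \,=\, \mathbb{E}[X \mid Y = y] \,=\, y + \sigma^2 \nabla_y \log p(y).$$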
arXiv Detail & Related papers (2020-05-09T19:52:32Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
- Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.