Tight Second-Order Certificates for Randomized Smoothing
- URL: http://arxiv.org/abs/2010.10549v2
- Date: Tue, 15 Dec 2020 03:17:36 GMT
- Title: Tight Second-Order Certificates for Randomized Smoothing
- Authors: Alexander Levine, Aounon Kumar, Thomas Goldstein, and Soheil Feizi
- Abstract summary: We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight.
- Score: 106.06908242424481
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is a popular way of providing robustness guarantees
against adversarial attacks: randomly-smoothed functions have a universal
Lipschitz-like bound, allowing for robustness certificates to be easily
computed. In this work, we show that there also exists a universal
curvature-like bound for Gaussian random smoothing: given the exact value and
gradient of a smoothed function, we compute a lower bound on the distance of a
point to its closest adversarial example, called the Second-order Smoothing
(SoS) robustness certificate. In addition to proving the correctness of this
novel certificate, we show that SoS certificates are realizable and therefore
tight. Interestingly, we show that the maximum achievable benefits, in terms of
certified robustness, from using the additional information of the gradient
norm are relatively small: because our bounds are tight, this is a fundamental
negative result. The gain of SoS certificates further diminishes if we consider
the estimation error of the gradient norms, for which we have developed an
estimator. We therefore additionally develop a variant of Gaussian smoothing,
called Gaussian dipole smoothing, which provides similar bounds to randomized
smoothing with gradient information, but with much-improved sample efficiency.
This allows us to achieve (marginally) improved robustness certificates on
high-dimensional datasets such as CIFAR-10 and ImageNet. Code is available at
https://github.com/alevine0/smoothing_second_order.
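For context, here is a minimal sketch of the standard first-order (Lipschitz-based) certificate that the SoS certificate strengthens, assuming a base classifier f mapping inputs to [0, 1]; the function names and toy classifier are illustrative, not taken from the authors' repository. The SoS certificate additionally consumes the gradient norm of the smoothed function to (modestly) enlarge this radius via the universal curvature-like bound.

```python
import numpy as np
from scipy.stats import norm

def smoothed_value(f, x, sigma, n_samples=10_000, rng=None):
    """Monte Carlo estimate of h(x) = E[f(x + delta)], delta ~ N(0, sigma^2 I),
    where the base classifier f maps inputs to [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    return float(np.mean([f(x + eps) for eps in noise]))

def first_order_radius(p, sigma):
    """Lipschitz-style certificate: Phi^{-1}(h(.)) is (1/sigma)-Lipschitz under
    Gaussian smoothing, so h stays above 1/2 within radius sigma * Phi^{-1}(p)."""
    return sigma * norm.ppf(p) if p > 0.5 else 0.0

# Toy base classifier: predicts class 1 iff the first coordinate is positive.
f = lambda z: float(z[0] > 0)
x, sigma = np.array([0.3, -0.1]), 0.25
p = smoothed_value(f, x, sigma)
print(f"smoothed score {p:.3f} -> certified radius {first_order_radius(p, sigma):.3f}")
```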
Related papers
- Robust Stochastic Optimization via Gradient Quantile Clipping [6.2844649973308835]
We introduce a quantile clipping strategy for Stochastic Gradient Descent (SGD).
We use quantiles of the gradient norm as clipping thresholds, which makes the method robust to outliers.
We propose an implementation of the algorithm using Huberiles.
arXiv Detail & Related papers (2023-09-29T15:24:48Z)
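A hedged sketch of the quantile-clipping idea in the entry above; the function name, rolling-window mechanics, and defaults are assumptions, not the paper's implementation.

```python
import numpy as np

def quantile_clipped_sgd_step(w, grad, norm_history, q=0.9, lr=0.1, window=100):
    """Clip the stochastic gradient at a quantile of recently observed norms
    rather than at a fixed threshold, limiting the influence of outliers."""
    g_norm = np.linalg.norm(grad)
    norm_history.append(g_norm)
    del norm_history[:-window]                # keep a rolling window of norms
    threshold = np.quantile(norm_history, q)  # data-driven clipping level
    scale = min(1.0, threshold / (g_norm + 1e-12))
    return w - lr * scale * grad
```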
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees [37.40957596986653]
We give convergence guarantees that show precise dependence on arbitrary clipping thresholds $c$.
In particular, we show that for deterministic gradient descent, the clipping threshold only affects the higher-order terms of convergence.
We give matching upper and lower bounds for the convergence of the gradient norm when running clipped SGD.
arXiv Detail & Related papers (2023-05-02T16:42:23Z)
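To make the object of study concrete, a minimal sketch of the clipping operator and clipped deterministic gradient descent; the quadratic objective is illustrative only.

```python
import numpy as np

def clip(g, c):
    """clip_c(g) = min(1, c / ||g||) * g, the operator whose threshold c is
    shown to affect only higher-order terms for deterministic GD."""
    n = np.linalg.norm(g)
    return g if n <= c else (c / n) * g

# Clipped GD on f(w) = 0.5 * ||w||^2 (so grad f(w) = w): early iterations move
# at the capped speed c, after which the iterates match unclipped GD.
w = np.array([10.0, -10.0])
for _ in range(60):
    w = w - 0.5 * clip(w, c=1.0)
print(np.linalg.norm(w))  # -> close to 0
```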
- Towards More Robust Interpretation via Local Gradient Alignment [37.464250451280336]
We show that for every non-negative homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is not normalization invariant.
We propose to combine both $\ell_2$ and cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient.
We experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100.
arXiv Detail & Related papers (2022-11-29T03:38:28Z)
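A rough sketch of the combined criterion described in the entry above; the weights and exact loss form are assumptions. It penalizes both the l2 and cosine distances between the input gradient at a point and at a nearby perturbed point.

```python
import numpy as np

def alignment_penalty(g_clean, g_perturbed, lam_l2=1.0, lam_cos=1.0):
    """Combine an l2 term (not normalization invariant on its own) with a
    cosine-distance term (scale invariant) to align local gradients."""
    l2_term = float(np.sum((g_clean - g_perturbed) ** 2))
    denom = np.linalg.norm(g_clean) * np.linalg.norm(g_perturbed) + 1e-12
    cos_term = 1.0 - float(np.dot(g_clean, g_perturbed)) / denom
    return lam_l2 * l2_term + lam_cos * cos_term
```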
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
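An illustrative sketch of the patch-and-aggregate step; the helper name, patch size, and stride are hypothetical, not the paper's configuration.

```python
import numpy as np

def smooth_reduce_logits(classifier, image, patch=24, stride=8):
    """Classify overlapping crops of the input and average their logits; the
    aggregated prediction is then certified as usual, yielding larger radii."""
    h, w = image.shape[:2]
    logits = [classifier(image[i:i + patch, j:j + patch])
              for i in range(0, h - patch + 1, stride)
              for j in range(0, w - patch + 1, stride)]
    return np.mean(logits, axis=0)
```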
- High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent yields convergence to critical points in high probability with the best-known iteration complexity for smooth losses.
arXiv Detail & Related papers (2021-06-28T00:17:01Z)
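A rough sketch of an update combining the three ingredients named above; the exact form is an assumption, not the paper's algorithm.

```python
import numpy as np

def clipped_normalized_momentum_step(w, grad, m, lr=0.01, beta=0.9, c=1.0):
    """Clip the (possibly heavy-tailed) gradient estimate, accumulate momentum,
    and take a normalized step."""
    g_norm = np.linalg.norm(grad)
    g = grad if g_norm <= c else (c / g_norm) * grad   # gradient clipping
    m = beta * m + (1.0 - beta) * g                    # momentum buffer
    w = w - lr * m / (np.linalg.norm(m) + 1e-12)       # normalized descent step
    return w, m
```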
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data-dependent framework can be seamlessly incorporated into three randomized smoothing approaches.
We obtain 9% and 6% improvements over the certified accuracy of the strongest baseline at a radius of 0.5 on CIFAR-10 and ImageNet, respectively.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Extensions and limitations of randomized smoothing for robustness guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z)