Certified Defense via Latent Space Randomized Smoothing with Orthogonal
Encoders
- URL: http://arxiv.org/abs/2108.00491v1
- Date: Sun, 1 Aug 2021 16:48:43 GMT
- Title: Certified Defense via Latent Space Randomized Smoothing with Orthogonal
Encoders
- Authors: Huimin Zeng, Jiahao Su, Furong Huang
- Abstract summary: We investigate the possibility of performing randomized smoothing and establishing the robust certification in the latent space of a network.
We use orthogonal modules, whose Lipschitz property is known for free by design, to propagate the certified radius estimated in the latent space back to the input space.
Experiments on CIFAR10 and ImageNet show that our method achieves competitive certified robustness with significantly improved efficiency during the test phase.
- Score: 13.723000245697866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized Smoothing (RS), one of the few provable defenses, has shown
strong effectiveness and scalability in defending against $\ell_2$-norm
adversarial perturbations. However, the Monte Carlo (MC) sampling that RS
requires at evaluation time is computationally expensive. To address
this issue, we investigate the possibility of performing randomized smoothing
and establishing the robust certification in the latent space of a network, so
that the overall dimensionality of tensors involved in computation could be
drastically reduced. To this end, we propose Latent Space Randomized Smoothing.
Another important aspect is that we use orthogonal modules, whose Lipschitz
property is known for free by design, to propagate the certified radius
estimated in the latent space back to the input space, providing valid
certifiable regions for the test samples in the input space. Experiments on
CIFAR10 and ImageNet show that our method achieves competitive certified
robustness while significantly improving efficiency during the test phase.
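To make the pipeline concrete, here is a minimal sketch of the idea, assuming a standard Cohen-et-al.-style Gaussian certificate: smooth and certify in the latent space, then divide the latent radius by the encoder's Lipschitz constant. `encoder`, `head`, and all hyperparameter values below are hypothetical stand-ins, not the authors' implementation.

```python
import torch
from scipy.stats import binomtest, norm

def certify_latent_rs(encoder, head, x, sigma=0.25, n=1000, alpha=0.001, lip=1.0):
    """Smooth the latent code of a single input x (shape: 1 x ...) and certify
    an l2 radius that is then mapped back to the input space."""
    with torch.no_grad():
        z = encoder(x)                            # encode once; noise lives in latent space
        noise = torch.randn(n, *z.shape[1:]) * sigma
        votes = head(z + noise).argmax(dim=1)     # n smoothed predictions
    counts = torch.bincount(votes)
    top = int(counts.argmax())
    # One-sided Clopper-Pearson lower bound on the top-class probability.
    p_low = binomtest(int(counts[top]), n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_low <= 0.5:
        return None, 0.0                          # abstain: no certificate
    r_latent = sigma * norm.ppf(p_low)            # Gaussian-smoothing radius in latent space
    # A lip-Lipschitz encoder maps an input ball of radius r/lip into the
    # certified latent ball, so the input-space radius is r_latent / lip.
    return top, r_latent / lip
```

With an orthogonal encoder the Lipschitz constant is exactly 1, so the latent radius transfers to the input space unchanged, while the MC samples live in a much lower-dimensional tensor.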
Related papers
- Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing [12.513566361816684]
Implicit models such as Deep Equilibrium Models (DEQs) have emerged as promising alternative approaches for building deep neural networks.
Existing certified defenses for DEQs that employ deterministic certification methods cannot certify robustness on large-scale datasets.
We provide the first randomized smoothing certified defense for DEQs to address these limitations.
arXiv Detail & Related papers (2024-11-01T06:14:11Z)
- Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency [6.199300239433395]
This work demonstrates that cutting the number of samples by one or two orders of magnitude still permits computing a robustness radius that is only slightly smaller.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-04-26T12:43:19Z)
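As a rough illustration of the tradeoff above (hypothetical numbers, using a standard Clopper-Pearson bound and the Gaussian-smoothing radius $\sigma\,\Phi^{-1}(\underline{p_A})$, not the paper's estimator):

```python
from scipy.stats import binomtest, norm

# How the certified radius sigma * Phi^{-1}(p_low) shrinks as the number of
# MC samples drops, for an input whose empirical top-class rate is 0.99.
sigma, p_hat, alpha = 0.25, 0.99, 0.001
for n in (100_000, 10_000, 1_000, 100):
    k = int(p_hat * n)                            # observed top-class votes
    p_low = binomtest(k, n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    print(n, round(sigma * norm.ppf(p_low), 3))   # radius degrades gracefully
```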
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
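For context, the certificate above builds on the classical link between Lipschitz constants and prediction margins. A minimal sketch of that baseline bound (not the paper's refined Lipschitz-variance-margin procedure), assuming every logit is `lip`-Lipschitz in $\ell_2$:

```python
import torch

def lipschitz_margin_radius(logits: torch.Tensor, lip: float) -> torch.Tensor:
    """If every logit is lip-Lipschitz in l2, the gap between the top class
    and the runner-up changes by at most 2 * lip * ||delta||_2 under an input
    perturbation delta, so predictions are stable for
    ||delta||_2 < margin / (2 * lip)."""
    top2 = logits.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]
    return margin / (2.0 * lip)
```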
- Systematic Investigation of Sparse Perturbed Sharpness-Aware Minimization Optimizer [158.2634766682187]
Deep neural networks often suffer from poor generalization due to complex and non-convex loss landscapes.
Sharpness-Aware Minimization (SAM) is a popular solution that smooths the loss landscape by minimizing the maximized change of training loss when adding a perturbation to the weights.
In this paper, we propose Sparse SAM (SSAM), an efficient and effective training scheme that achieves sparse perturbation by a binary mask.
arXiv Detail & Related papers (2023-06-30T09:33:41Z)
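A hedged sketch of how a binary mask can sparsify the SAM perturbation; the top-k-by-gradient-magnitude mask here is one plausible choice used purely for illustration, not necessarily the paper's mask-selection rule:

```python
import torch

def ssam_step(model, loss_fn, x, y, opt, rho=0.05, sparsity=0.5):
    """One Sparse-SAM-style update: ascend along a masked gradient direction,
    recompute the gradient at the perturbed weights, then descend."""
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = {}
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)        # SAM ascent direction
            k = max(1, int(sparsity * e.numel()))
            thresh = e.abs().flatten().kthvalue(e.numel() - k + 1).values
            eps[p] = e * (e.abs() >= thresh).float()      # binary sparse mask
            p.add_(eps[p])                                # perturb sparsely
    opt.zero_grad()
    loss_fn(model(x), y).backward()                       # grad at perturbed point
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)                                     # undo the perturbation
    opt.step()                                            # descend with the SAM gradient
    opt.zero_grad()
```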
- Adversarial robustness of VAEs through the lens of local geometry [1.2228014485474623]
In an unsupervised attack on variational autoencoders (VAEs), an adversary finds a small perturbation in an input sample that significantly changes its latent space encoding.
This paper demonstrates that an optimal way for an adversary to attack VAEs is to exploit a directional bias of a pullback metric tensor.
arXiv Detail & Related papers (2022-08-08T05:53:57Z)
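The directional bias above can be probed directly: for an encoder mean map $\mu$ with Jacobian $J$, the pullback metric $G = J^\top J$ ranks input directions by how strongly they move the latent code. A minimal sketch, where `encoder_mean` is a hypothetical stand-in for the VAE's mean network (note the eigendecomposition is only practical for modest input dimensions):

```python
import torch
from torch.autograd.functional import jacobian

def most_sensitive_direction(encoder_mean, x):
    """Return the input direction that maximally distorts the latent code,
    i.e. the top eigenvector of the pullback metric G = J^T J at x."""
    J = jacobian(encoder_mean, x).reshape(-1, x.numel())  # k x n Jacobian
    G = J.T @ J                                           # n x n pullback metric
    eigvals, eigvecs = torch.linalg.eigh(G)               # ascending eigenvalues
    return eigvecs[:, -1].reshape(x.shape)                # top eigenvector
```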
- Double Sampling Randomized Smoothing [19.85592163703077]
We propose a Double Sampling Randomized Smoothing (DSRS) framework.
It exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
We show that DSRS consistently certifies larger robust radii than existing baselines under different settings.
arXiv Detail & Related papers (2022-06-16T04:34:28Z)
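In rough form (a paraphrase of the dual-constraint idea above, not the paper's exact theorem), DSRS certifies radius $r$ if the worst-case classifier consistent with the probabilities observed under both the original smoothing distribution $\mathcal{P}$ and the additional distribution $\mathcal{Q}$ still predicts the top class under every shift within $r$:

$$\min_{\substack{g:\,\mathbb{R}^d \to [0,1] \\ \mathbb{E}_{\mathcal{P}}[g]=p_A,\ \mathbb{E}_{\mathcal{Q}}[g]=q_A}} \mathbb{E}_{\mathcal{P}_\delta}[g] \;>\; \tfrac{1}{2} \qquad \forall\, \|\delta\|_2 \le r,$$

where $\mathcal{P}_\delta$ denotes $\mathcal{P}$ recentered at $x+\delta$; the extra constraint from $\mathcal{Q}$ shrinks the feasible set and hence tightens the certificate relative to single-distribution smoothing.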
- Input-Specific Robustness Certification for Randomized Smoothing [76.76115360719837]
We propose Input-Specific Sampling (ISS) acceleration to achieve cost-effective robustness certification.
ISS can speed up certification by more than three times at a limited cost of 0.05 in certified radius.
arXiv Detail & Related papers (2021-12-21T12:16:03Z)
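In spirit (a simplified sketch, not the ISS algorithm itself), input-specific sampling stops drawing Monte Carlo samples for an input once additional samples can no longer buy more than a preset amount of certified radius. `sample_votes(n)` is a hypothetical oracle returning the top-class vote count out of `n` smoothed predictions for this input:

```python
from scipy.stats import binomtest, norm

def input_specific_certify(sample_votes, sigma=0.25, target_loss=0.05,
                           n_start=1_000, n_max=100_000, alpha=0.001):
    """Grow the per-input sample budget only while it still pays off."""
    n = n_start
    while True:
        k = sample_votes(n)
        p_low = binomtest(k, n).proportion_ci(
            confidence_level=1 - 2 * alpha, method="exact").low
        if p_low <= 0.5:
            return 0.0                                # no certificate
        r_now = sigma * norm.ppf(p_low)
        # Radius we could hope for with the full budget at the same vote rate:
        k_max = int(k / n * n_max)
        r_max = sigma * norm.ppf(binomtest(k_max, n_max).proportion_ci(
            confidence_level=1 - 2 * alpha, method="exact").low)
        if r_max - r_now <= target_loss or n >= n_max:
            return r_now                              # extra samples not worth it
        n = min(n * 10, n_max)
```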
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
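A hedged sketch of splitting noise as described above, under my reading of the method: each coordinate axis is partitioned into width-$\lambda$ cells at an offset, and the smoothing sample is the center of the cell containing the input; the derandomized DSSN variant would replace the random offsets with fixed ones:

```python
import numpy as np

def ssn_sample(x, lam, rng):
    """One splitting-noise (SSN) smoothing sample: snap each coordinate of x
    to the center of its randomly offset width-`lam` cell. Unlike additive
    uniform noise, the offsets need not be independent across coordinates."""
    offset = rng.uniform(0.0, lam, size=x.shape)      # could be shared or coupled
    return np.floor((x - offset) / lam) * lam + offset + lam / 2.0
```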
- Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that second-order-smoothing (SoS) certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs together with parameter estimation via the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.