Accelerated Smoothing: A Scalable Approach to Randomized Smoothing
- URL: http://arxiv.org/abs/2402.07498v1
- Date: Mon, 12 Feb 2024 09:07:54 GMT
- Title: Accelerated Smoothing: A Scalable Approach to Randomized Smoothing
- Authors: Devansh Bhardwaj, Kshitiz Kaushik, Sarthak Gupta
- Abstract summary: We propose a novel approach that replaces Monte Carlo sampling with the training of a surrogate neural network.
We show that our approach significantly accelerates the robust radius certification process, providing nearly $600$X improvement in computation time.
- Score: 4.530339602471495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing has emerged as a potent certifiable defense against
adversarial attacks by employing smoothing noises from specific distributions
to ensure the robustness of a smoothed classifier. However, the utilization of
Monte Carlo sampling in this process introduces a compute-intensive element,
which constrains the practicality of randomized smoothing on a larger scale. To
address this limitation, we propose a novel approach that replaces Monte Carlo
sampling with the training of a surrogate neural network. Through extensive
experimentation in various settings, we demonstrate the efficacy of our
approach in approximating the smoothed classifier with remarkable precision.
Furthermore, we demonstrate that our approach significantly accelerates the
robust radius certification process, providing nearly $600$X improvement in
computation time, overcoming the computational bottlenecks associated with
traditional randomized smoothing.
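To make the computational bottleneck concrete, the Monte Carlo certification step that the paper's surrogate network replaces can be sketched as follows. This is a minimal illustration of standard Gaussian randomized smoothing (in the style of Cohen et al.), not the surrogate approach itself; the `certify` function, the toy classifier, and all parameter values are hypothetical choices for the sketch.

```python
import numpy as np
from scipy.stats import beta, norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, seed=0):
    """Certify a smoothed classifier at x via Monte Carlo sampling.

    Returns (predicted class, certified L2 radius); (None, 0.0) means abstain.
    """
    rng = np.random.default_rng(seed)
    # draw n Gaussian-perturbed copies of x and classify each one
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.shape[0]))
    counts = np.bincount(base_classifier(noisy), minlength=2)
    top = int(counts.argmax())
    # one-sided Clopper-Pearson lower confidence bound on the top-class probability
    p_lower = beta.ppf(alpha, counts[top], n - counts[top] + 1)
    if p_lower <= 0.5:
        return None, 0.0
    # certified L2 radius of the smoothed classifier at x
    return top, sigma * norm.ppf(p_lower)

# toy base classifier: class 1 iff the first coordinate is positive
toy = lambda batch: (batch[:, 0] > 0).astype(int)
cls, radius = certify(toy, np.array([2.0, 0.0]))
```

The certified radius grows with the number of samples `n`, which is why practical certification uses $10^4$-$10^5$ forward passes per input; that per-input cost is the compute-intensive element the surrogate network is trained to amortize.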
Related papers
- Variational Randomized Smoothing for Sample-Wise Adversarial Robustness [12.455543308060196]
This paper proposes a new variational framework that uses a per-sample noise level suitable for each input by introducing a noise level selector.
Our experimental results demonstrate enhancement of empirical robustness against adversarial attacks.
arXiv Detail & Related papers (2024-07-16T15:25:13Z)
- Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches [28.87505826018613]
We propose two cost-effective approaches to boost robustness of randomized smoothing while preserving its clean performance.
The first approach introduces a new robust training method AdvMacer which combines adversarial training and certification for randomized smoothing.
The second approach introduces a post-processing method EsbRS which greatly improves the robustness certificate based on building model ensembles.
arXiv Detail & Related papers (2023-10-11T18:06:05Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Minibatch and Momentum Model-based Methods for Stochastic Non-smooth Non-convex Optimization [3.4809730725241597]
We make two important extensions to model-based methods.
First, we propose a minibatch variant that uses a set of samples to approximate the model function in each iteration.
Second, motivated by the success of momentum techniques, we propose a new momentum-based model.
arXiv Detail & Related papers (2021-06-06T05:31:57Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this setting.
We also derive the first non-trivial high-probability complexity bounds for SGD with clipping without a light-tails assumption on the noise.
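The gradient-clipping mechanism underlying such methods can be illustrated with a minimal sketch. This shows generic norm clipping applied to a plain SGD step, not the clipped-SSTM algorithm itself; the function name and step sizes are hypothetical.

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.05, clip=1.0):
    # rescale the stochastic gradient so its norm is at most `clip`;
    # this bounds each step even when the gradient noise is heavy-tailed
    g_norm = np.linalg.norm(grad)
    if g_norm > clip:
        grad = grad * (clip / g_norm)
    return w - lr * grad

# a single heavy-tailed gradient sample cannot move the iterate
# farther than lr * clip
w_next = clipped_sgd_step(np.zeros(3), np.array([1e6, -1e6, 1e6]))
```

Clipping is what makes high-probability guarantees possible without light tails: no single outlier gradient can ruin an iterate.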
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
- Distributed Sketching Methods for Privacy Preserving Regression [54.51566432934556]
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches.
We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
arXiv Detail & Related papers (2020-02-16T08:35:48Z)
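The core dimension-reduction idea in the last entry can be sketched in a few lines. This is generic Gaussian sketch-and-solve for least squares, not the paper's privacy-preserving distributed variant; the function name and problem sizes are hypothetical.

```python
import numpy as np

def sketched_lstsq(A, b, m, seed=0):
    # compress the n-row least-squares problem to m << n rows with a
    # Gaussian sketch S, then solve the much smaller problem
    # min_x || S A x - S b ||
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# on a consistent system, the sketched solve recovers the true solution
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 5))
x_true = np.arange(5.0)
x_hat = sketched_lstsq(A, A @ x_true, m=50)
```

Because each worker only ever sees `S @ A` rather than `A`, sketching compresses the data and obscures the raw rows at the same time, which is the basis for the privacy and straggler-resilience claims.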
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.