Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences
- URL: http://arxiv.org/abs/2406.10427v3
- Date: Thu, 10 Jul 2025 07:08:38 GMT
- Title: Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences
- Authors: Saiyue Lyu, Shadab Shaikh, Frederick Shpilevskiy, Evan Shelhamer, Mathias Lécuyer
- Abstract summary: We propose Adaptive Randomized Smoothing (ARS) to certify the predictions of our test-time adaptive models against adversarial examples. ARS extends the analysis of randomized smoothing using $f$-Differential Privacy to certify the adaptive composition of multiple steps. We instantiate ARS on deep image classification to certify predictions against adversarial examples of bounded $L_{\infty}$ norm.
- Score: 8.40389580910855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Adaptive Randomized Smoothing (ARS) to certify the predictions of our test-time adaptive models against adversarial examples. ARS extends the analysis of randomized smoothing using $f$-Differential Privacy to certify the adaptive composition of multiple steps. For the first time, our theory covers the sound adaptive composition of general and high-dimensional functions of noisy inputs. We instantiate ARS on deep image classification to certify predictions against adversarial examples of bounded $L_{\infty}$ norm. In the $L_{\infty}$ threat model, ARS enables flexible adaptation through high-dimensional input-dependent masking. We design adaptivity benchmarks, based on CIFAR-10 and CelebA, and show that ARS improves standard test accuracy by $1$ to $15\%$ points. On ImageNet, ARS improves certified test accuracy by up to $1.6\%$ points over standard RS without adaptivity. Our code is available at https://github.com/ubc-systopia/adaptive-randomized-smoothing.
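For context, the standard (non-adaptive) randomized smoothing baseline that ARS extends can be sketched with a minimal Monte Carlo version in the style of Cohen et al. This is illustrative only: the base classifier `f`, the noise level `sigma`, and the sample count are assumptions, and a real certificate would lower-bound the top-class probability with a confidence interval rather than use the raw estimate.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(f, x, sigma, n=1000, seed=0):
    """Majority vote of base classifier f over Gaussian perturbations of x,
    with a Cohen-style certified L2 radius R = sigma * Phi^{-1}(p_top)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        label = f(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    top = max(counts, key=counts.get)
    # Empirical top-class probability, clamped away from 1 for inv_cdf.
    p_top = min(counts[top] / n, 1.0 - 1e-6)
    # Abstain from certification unless the top class wins a strict majority.
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius
```

ARS differs by adapting the noisy queries across multiple steps (e.g. through input-dependent masking) and accounting for their composition with $f$-DP; the sketch above is the single-step baseline it is compared against.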
Related papers
- Adaptive Diffusion Denoised Smoothing: Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion [6.003113715347812]
We propose Adaptive Diffusion Denoised Smoothing, a method for certifying the predictions of a vision model against adversarial examples. We show that these adaptive mechanisms can be composed through a GDP privacy filter to analyze the end-to-end robustness of the guided denoising process.
arXiv Detail & Related papers (2025-07-10T20:52:22Z)
- One Sample is Enough to Make Conformal Prediction Robust [53.78604391939934]
We show that conformal prediction attains some robustness even with a forward pass on a single randomly perturbed input. Our approach returns robust sets with smaller average set size compared to SOTA methods which use many (e.g. around 100) passes per input.
arXiv Detail & Related papers (2025-06-19T19:14:25Z)
- Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness [21.394217131341932]
We introduce a novel certifying adapters framework (CAF) that enables and enhances the certification of adversarial robustness.
CAF achieves improved certified accuracies when compared to methods based on random or denoised smoothing.
An ensemble of adapters enables a single pre-trained feature extractor to defend against a range of noise perturbation scales.
arXiv Detail & Related papers (2024-05-25T03:18:52Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion [23.309600117618025]
We adapt randomized smoothing for discrete sequence classifiers to provide certified robustness against edit distance-bounded adversaries.
Our proof of certification deviates from the established Neyman-Pearson approach, which is intractable in our setting, and is instead organized around longest common subsequences.
When applied to the popular MalConv malware detection model, our smoothing mechanism RS-Del achieves a certified accuracy of 91% at an edit distance radius of 128 bytes.
arXiv Detail & Related papers (2023-01-31T01:40:26Z)
- Double Sampling Randomized Smoothing [19.85592163703077]
We propose a Double Sampling Randomized Smoothing (DSRS) framework.
It exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
We show that DSRS consistently certifies larger robust radii than existing baselines under different settings.
arXiv Detail & Related papers (2022-06-16T04:34:28Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- On the robustness of randomized classifiers to adversarial examples [11.359085303200981]
We introduce a new notion of robustness for randomized classifiers, enforcing local Lipschitzness using probability metrics.
We show that our results are applicable to a wide range of machine learning models under mild hypotheses.
All robust models we trained can simultaneously achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-02-22T10:16:58Z)
- Adversarially Robust Classifier with Covariate Shift Adaptation [25.39995678746662]
Existing adversarially trained models typically perform inference on test examples independently from each other.
We show that a simple adaptive batch normalization (BN) technique can significantly improve the robustness of these models to random perturbations.
We further demonstrate that the adaptive BN technique significantly improves robustness against common corruptions, while often enhancing performance against adversarial attacks.
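The adaptive BN idea can be illustrated with a toy 1-D sketch (the feature values, shift, and statistics below are made up for illustration): instead of normalizing test features with stale training-time running statistics, recompute the mean and variance on the shifted test batch.

```python
import numpy as np

def batchnorm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard BN transform with given normalization statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Running statistics estimated on (unshifted) training data.
run_mean, run_var = 0.0, 1.0

# Test batch under covariate shift: mean 3, std 2 instead of 0, 1.
rng = np.random.default_rng(0)
test_x = rng.normal(loc=3.0, scale=2.0, size=1000)

standard = batchnorm(test_x, run_mean, run_var)            # stale statistics
adaptive = batchnorm(test_x, test_x.mean(), test_x.var())  # adaptive BN
```

With stale statistics the normalized features stay shifted away from zero, while adaptive BN re-centers and re-scales them, which is the mechanism the paper credits for the robustness gains under perturbation and corruption.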
arXiv Detail & Related papers (2021-02-09T19:51:56Z)
- Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations [78.23408201652984]
Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches.
Our work is based on randomized smoothing, which builds a provably robust classifier via randomizing an input.
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
arXiv Detail & Related papers (2020-11-15T21:34:44Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
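A minimal sketch of what "regularizing the prediction consistency over noise" can look like (an illustrative form, not necessarily the paper's exact loss): penalize the divergence of each noisy copy's softmax output from their average.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(noisy_logits):
    """Mean KL(p_i || p_bar) over m noisy copies; zero iff all copies agree."""
    p = softmax(noisy_logits)              # shape (m, num_classes)
    p_bar = p.mean(axis=0, keepdims=True)  # average prediction over copies
    kl = (p * (np.log(p + 1e-12) - np.log(p_bar + 1e-12))).sum(axis=-1)
    return kl.mean()
```

Adding such a term to the training objective pushes the base classifier toward agreeing with itself across noise draws, which is what tightens the smoothed classifier's certified radius.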
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.