Towards Assessment of Randomized Smoothing Mechanisms for Certifying
Adversarial Robustness
- URL: http://arxiv.org/abs/2005.07347v3
- Date: Sun, 7 Jun 2020 18:39:33 GMT
- Title: Towards Assessment of Randomized Smoothing Mechanisms for Certifying
Adversarial Robustness
- Authors: Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
- Abstract summary: We argue that the main difficulty is how to assess the appropriateness of each randomized mechanism.
We first conclude that the Gaussian mechanism is indeed an appropriate option to certify $\ell_2$-norm robustness.
Surprisingly, we show that the Gaussian mechanism is also an appropriate option for certifying $\ell_\infty$-norm robustness, instead of the Exponential mechanism.
- Score: 50.96431444396752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a certified defensive technique, randomized smoothing has received
considerable attention due to its scalability to large datasets and neural
networks. However, several important questions remain unanswered, such as (i)
whether the Gaussian mechanism is an appropriate option for certifying
$\ell_2$-norm robustness, and (ii) whether there is an appropriate randomized
(smoothing) mechanism to certify $\ell_\infty$-norm robustness. To shed light
on these questions, we argue that the main difficulty is how to assess the
appropriateness of each randomized mechanism. In this paper, we propose a
generic framework that connects the existing frameworks in
\cite{lecuyer2018certified, li2019certified}, to assess randomized mechanisms.
Under our framework, for a randomized mechanism that can certify a certain
extent of robustness, we define the magnitude of its required additive noise as
the metric for assessing its appropriateness. We also prove lower bounds on
this metric for the $\ell_2$-norm and $\ell_\infty$-norm cases as the criteria
for assessment. Based on our framework, we assess the Gaussian and Exponential
mechanisms by comparing the magnitude of additive noise required by these
mechanisms and the lower bounds (criteria). We first conclude that the Gaussian
mechanism is indeed an appropriate option to certify $\ell_2$-norm robustness.
Surprisingly, we show that the Gaussian mechanism is also an appropriate option
for certifying $\ell_\infty$-norm robustness, instead of the Exponential
mechanism. Finally, we generalize our framework to $\ell_p$-norm for any
$p\geq2$. Our theoretical findings are verified by evaluations on CIFAR10 and
ImageNet.
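The paper's own assessment framework (noise-magnitude lower bounds per norm) is not reproduced here, but the Gaussian randomized-smoothing procedure it analyzes can be illustrated with a minimal sketch in the style of standard smoothing certification: classify many Gaussian-perturbed copies of the input, take the majority class, and convert the top-class probability into an $\ell_2$ certified radius via the Gaussian quantile function. The classifier `f`, noise level `sigma`, and sample count `n` below are illustrative placeholders, not the paper's actual experimental setup.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(f, x, sigma, n=1000, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothed classifier g(x).

    f: base classifier mapping an array to a (hashable) class label.
    Returns the majority class among n noisy copies and its empirical probability.
    """
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    top = max(counts, key=counts.get)
    return top, counts[top] / n

def certified_l2_radius(p_top, sigma):
    """Certified l2 radius R = sigma * Phi^{-1}(p_top), valid when p_top > 1/2.

    Larger required sigma for the same radius signals a less appropriate
    mechanism in the sense assessed by the paper.
    """
    if p_top <= 0.5:
        return 0.0
    return sigma * norm.ppf(p_top)
```

In practice the empirical probability `p_top` would be replaced by a high-confidence lower bound (e.g. a Clopper-Pearson interval) before computing the radius; that refinement is omitted to keep the sketch short.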
Related papers
- Adaptive Data Analysis in a Balanced Adversarial Model [26.58630744414181]
In adaptive data analysis, a mechanism gets $n$ i.i.d. samples from an unknown distribution $D$, and is required to provide accurate estimations.
We consider more restricted adversaries, called \emph{balanced}, where each such adversary consists of two separated algorithms.
We show that these stronger hardness assumptions are unavoidable in the sense that any computationally bounded \emph{balanced} adversary implies the existence of public-key cryptography.
arXiv Detail & Related papers (2023-05-24T15:08:05Z) - A Robustness Analysis of Blind Source Separation [91.3755431537592]
Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that a generic BSS-solution in response to general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
arXiv Detail & Related papers (2023-03-17T16:30:51Z) - Constrained Pure Exploration Multi-Armed Bandits with a Fixed Budget [4.226118870861363]
We consider a constrained, pure exploration, multi-armed bandit formulation under a fixed budget.
We propose an algorithm called \textsc{Constrained-SR} based on the Successive Rejects framework.
We show that the associated decay rate is nearly optimal relative to an information theoretic lower bound in certain special cases.
arXiv Detail & Related papers (2022-11-27T08:58:16Z) - Random Rank: The One and Only Strategyproof and Proportionally Fair
Randomized Facility Location Mechanism [103.36492220921109]
We show that although Strong Proportionality is a well-motivated and basic axiom, there is no deterministic strategyproof mechanism satisfying the property.
We then identify a randomized mechanism called Random Rank which satisfies Strong Proportionality in expectation.
Our main result characterizes Random Rank as the unique mechanism that achieves universal truthfulness, universal anonymity, and Strong Proportionality in expectation.
arXiv Detail & Related papers (2022-05-30T00:51:57Z) - Almost Tight L0-norm Certified Robustness of Top-k Predictions against
Adversarial Perturbations [78.23408201652984]
Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches.
Our work is based on randomized smoothing, which builds a provably robust classifier via randomizing an input.
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
arXiv Detail & Related papers (2020-11-15T21:34:44Z) - Adversarial robustness via robust low rank representations [44.41534627858075]
In this work we highlight the benefits of natural low rank representations that often exist for real data such as images.
We exploit low rank data representations to provide improved guarantees over state-of-the-art randomized smoothing-based approaches.
Our second contribution is for the more challenging setting of certified robustness to perturbations measured in $\ell_\infty$ norm.
arXiv Detail & Related papers (2020-07-13T17:57:00Z) - Sharp Statistical Guarantees for Adversarially Robust Gaussian
Classification [54.22421582955454]
We provide the first result of the optimal minimax guarantees for the excess risk for adversarially robust classification.
Results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting.
arXiv Detail & Related papers (2020-06-29T21:06:52Z) - Consistency Regularization for Certified Robustness of Smoothed
Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
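The regularizer described above is not specified in this summary; one plausible instantiation, assumed here for illustration, penalizes the KL divergence of each noisy copy's prediction from the mean prediction over all noisy copies of the same input, which pushes the smoothed classifier toward consistent (and hence more certifiable) outputs.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_noisy):
    """Mean KL(p_i || p_bar) over m noisy copies of one input.

    logits_noisy: array of shape (m, num_classes), the base classifier's
    logits on m Gaussian-perturbed copies. p_bar is the average prediction;
    the loss is zero iff all copies agree exactly.
    """
    p = softmax(logits_noisy)
    p_bar = p.mean(axis=0, keepdims=True)
    eps = 1e-12  # guard against log(0)
    kl = np.sum(p * (np.log(p + eps) - np.log(p_bar + eps)), axis=-1)
    return kl.mean()
```

During training this term would be added, with a tunable weight, to the usual classification loss on the noisy copies; the weight controls the accuracy/robustness trade-off the summary mentions.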
arXiv Detail & Related papers (2020-06-07T06:57:43Z) - Estimating Principal Components under Adversarial Perturbations [25.778123431786653]
We study a natural model of robustness for high-dimensional statistical estimation problems.
Our model is motivated by emerging paradigms such as low precision machine learning and adversarial training.
arXiv Detail & Related papers (2020-05-31T20:27:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.