Adversarial robustness via robust low rank representations
- URL: http://arxiv.org/abs/2007.06555v2
- Date: Sat, 1 Aug 2020 04:25:25 GMT
- Title: Adversarial robustness via robust low rank representations
- Authors: Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
- Abstract summary: In this work we highlight the benefits of natural low rank representations that often exist for real data such as images.
We exploit low rank data representations to provide improved guarantees over state-of-the-art randomized smoothing-based approaches.
Our second contribution is for the more challenging setting of certified robustness to perturbations measured in $\ell_\infty$ norm.
- Score: 44.41534627858075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness measures the susceptibility of a classifier to
imperceptible perturbations made to the inputs at test time. In this work we
highlight the benefits of natural low rank representations that often exist for
real data such as images, for training neural networks with certified
robustness guarantees.
Our first contribution is for certified robustness to perturbations measured
in $\ell_2$ norm. We exploit low rank data representations to provide improved
guarantees over state-of-the-art randomized smoothing-based approaches on
standard benchmark datasets such as CIFAR-10 and CIFAR-100.
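As background for the smoothing-based certificates mentioned above, a minimal sketch of the standard Gaussian-smoothing $\ell_2$ radius (in the style of Cohen et al.'s bound $R = \sigma\,\Phi^{-1}(p_A)$; this is the generic formula, not the paper's low-rank construction, and the names `p_a` and `sigma` are illustrative):

```python
from statistics import NormalDist

def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier:
    R = sigma * Phi^{-1}(p_a), where p_a > 1/2 lower-bounds the
    probability that the base classifier returns the top class
    under additive noise N(0, sigma^2 I)."""
    if p_a <= 0.5:
        return 0.0  # no certificate when the top class is not a clear majority
    return sigma * NormalDist().inv_cdf(p_a)

# Example: noise level sigma = 0.5 and top-class probability bound p_a = 0.9
r = certified_l2_radius(0.9, 0.5)
```

Larger noise levels and more confident top-class votes both enlarge the certified radius, which is the trade-off the paper's low-rank representations are designed to improve.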
Our second contribution is for the more challenging setting of certified
robustness to perturbations measured in $\ell_\infty$ norm. We demonstrate
empirically that natural low rank representations have inherent robustness
properties, that can be leveraged to provide significantly better guarantees
for certified robustness to $\ell_\infty$ perturbations in those
representations. Our certificate of $\ell_\infty$ robustness relies on a
natural quantity involving the $\infty \to 2$ matrix operator norm associated
with the representation, to translate robustness guarantees from $\ell_2$ to
$\ell_\infty$ perturbations.
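The translating quantity above, $\|A\|_{\infty \to 2} = \max_{\|x\|_\infty \le 1} \|Ax\|_2$, can be illustrated directly: the maximum of this convex objective over the box is attained at a sign vector in $\{-1,+1\}^n$, so it can be brute-forced for tiny matrices (illustration only; the function name is ours, not the paper's):

```python
import itertools
import math

def inf_to_2_norm(A):
    """||A||_{inf->2} = max_{||x||_inf <= 1} ||Ax||_2.
    The maximum of this convex function over the box [-1, 1]^n is
    attained at a vertex, i.e. a sign vector in {-1, +1}^n, so
    exhaustive search is exact for small n (exponential in general)."""
    n = len(A[0])
    best = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        ax = [sum(a * x for a, x in zip(row, signs)) for row in A]
        best = max(best, math.sqrt(sum(v * v for v in ax)))
    return best

# Identity: the maximizer (+1, +1) gives ||x||_2 = sqrt(2)
norm_val = inf_to_2_norm([[1.0, 0.0], [0.0, 1.0]])
```

Since $\|A\delta\|_2 \le \|A\|_{\infty \to 2}\,\|\delta\|_\infty$, an $\ell_2$ certificate of radius $R$ in the representation $x \mapsto Ax$ yields $\ell_\infty$ robustness of radius $R / \|A\|_{\infty \to 2}$, which is the translation the abstract describes.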
A key technical ingredient for our certification guarantees is a fast
algorithm with provable guarantees based on the multiplicative weights update
method to provide upper bounds on the above matrix norm. Our algorithmic
guarantees improve upon the state of the art for this problem, and may be of
independent interest.
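Computing $\|A\|_{\infty \to 2}$ exactly is intractable in general, which is why the paper develops a fast multiplicative-weights algorithm for upper-bounding it. That algorithm is not reproduced here; as a stand-in, the following shows only the trivial row-wise $\ell_1$ bound, a valid but much cruder upper bound on the same norm, to illustrate what an efficiently computable certificate of this quantity looks like (naming is ours):

```python
import math

def inf_to_2_upper_bound(A):
    """Crude certified upper bound on ||A||_{inf->2}.
    For ||x||_inf <= 1, each coordinate satisfies |<a_i, x>| <= ||a_i||_1,
    hence ||Ax||_2^2 <= sum_i ||a_i||_1^2. This is far looser than the
    paper's MWU-based bound but is always valid."""
    return math.sqrt(sum(sum(abs(v) for v in row) ** 2 for row in A))

# For the identity the bound sqrt(1 + 1) happens to be tight
b = inf_to_2_upper_bound([[1.0, 0.0], [0.0, 1.0]])
```

Any such upper bound plugs into the $\ell_\infty$ certificate: dividing an $\ell_2$ radius by an upper bound on $\|A\|_{\infty \to 2}$ still gives a sound (if smaller) certified $\ell_\infty$ radius, so tighter bounds like the paper's MWU algorithm directly yield stronger certificates.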
Related papers
- Certified Robustness against Sparse Adversarial Perturbations via Data Localization [39.883465335244594]
We show that a simple classifier emerges from our theory, dubbed Box-NN, which naturally incorporates the geometry of the problem and improves upon the current state-of-the-art in certified robustness against sparse attacks for the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2024-05-23T05:02:00Z) - Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
arXiv Detail & Related papers (2022-05-20T13:07:36Z) - Robust and Accurate -- Compositional Architectures for Randomized Smoothing [5.161531917413708]
We propose a compositional architecture, ACES, which certifiably decides on a per-sample basis whether to use a smoothed model yielding predictions with guarantees or a more accurate standard model without guarantees.
This, in contrast to prior approaches, enables both high standard accuracies and significant provable robustness.
arXiv Detail & Related papers (2022-04-01T14:46:25Z) - Certifiably Robust Interpretation via Renyi Differential Privacy [77.04377192920741]
We study the problem of interpretation robustness from a new perspective of Renyi differential privacy (RDP).
First, it can offer provable and certifiable top-$k$ robustness.
Second, our proposed method offers $\sim 10\%$ better experimental robustness than existing approaches.
Third, our method can provide a smooth tradeoff between robustness and computational efficiency.
arXiv Detail & Related papers (2021-07-04T06:58:01Z) - Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations [78.23408201652984]
Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches.
Our work is based on randomized smoothing, which builds a provably robust classifier via randomizing an input.
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
arXiv Detail & Related papers (2020-11-15T21:34:44Z) - Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using $0th$-order and $1st$-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z) - Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z) - Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness [50.96431444396752]
We argue that the main difficulty is how to assess the appropriateness of each randomized mechanism.
We first conclude that the Gaussian mechanism is indeed an appropriate option to certify $\ell_2$-norm.
Surprisingly, we show that the Gaussian mechanism is also an appropriate option for certifying $\ell_\infty$-norm, instead of the Exponential mechanism.
arXiv Detail & Related papers (2020-05-15T03:54:53Z) - Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness [15.38718018477333]
We derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart.
We also design a new certification algorithm, which can leverage the regularization effect to provide tighter robustness lower bound that holds with high probability.
arXiv Detail & Related papers (2020-02-17T20:54:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.