Smooth-Reduce: Leveraging Patches for Improved Certified Robustness
- URL: http://arxiv.org/abs/2205.06154v1
- Date: Thu, 12 May 2022 15:26:20 GMT
- Title: Smooth-Reduce: Leveraging Patches for Improved Certified Robustness
- Authors: Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
- Abstract summary: We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
- Score: 100.28947222215463
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing (RS) has been shown to be a fast, scalable technique for
certifying the robustness of deep neural network classifiers. However, methods
based on RS require augmenting data with large amounts of noise, which leads to
significant drops in accuracy. We propose a training-free, modified smoothing
approach, Smooth-Reduce, that leverages patching and aggregation to provide
improved classifier certificates. Our algorithm classifies overlapping patches
extracted from an input image, and aggregates the predicted logits to certify a
larger radius around the input. We study two aggregation schemes -- max and
mean -- and show that both approaches provide better certificates in terms of
certified accuracy, average certified radii and abstention rates as compared to
concurrent approaches. We also provide theoretical guarantees for such
certificates, and empirically show significant improvements over other
randomized smoothing methods that require expensive retraining. Further, we
extend our approach to videos and provide meaningful certificates for video
classifiers. A project page can be found at
https://nyu-dice-lab.github.io/SmoothReduce/
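
The abstract describes the pipeline only at a high level; below is a minimal, hypothetical sketch of a patch-and-aggregate smoothed classifier in the spirit of Smooth-Reduce, assuming a `base_classifier` that maps a single patch to class logits. The patch size, stride, sample count, and single-stage counting (omitting the usual selection/estimation split) are illustrative assumptions, and the certificate shown is the standard Cohen et al. radius sigma * Phi^{-1}(p_A), not the authors' reference implementation.

```python
# Hypothetical sketch: overlapping patches + randomized smoothing with
# max/mean logit aggregation, followed by a standard smoothing certificate.
# `base_classifier`, patch_size, stride, and n_samples are illustrative.
import numpy as np
from scipy.stats import beta, norm


def extract_patches(x, patch_size, stride):
    """Return overlapping square patches of an HxWxC image."""
    h, w = x.shape[:2]
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(x[i:i + patch_size, j:j + patch_size])
    return patches


def smooth_reduce_counts(base_classifier, x, sigma, n_samples,
                         patch_size=56, stride=28, reduce="mean"):
    """Count, over noise draws, which class wins after patch aggregation."""
    patches = extract_patches(x, patch_size, stride)
    num_classes = base_classifier(patches[0]).shape[-1]
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        # Classify every noisy patch and stack the resulting logits.
        logits = np.stack([
            base_classifier(p + sigma * np.random.randn(*p.shape))
            for p in patches
        ])
        # Aggregate across patches with a max or mean reduction.
        agg = logits.max(axis=0) if reduce == "max" else logits.mean(axis=0)
        counts[int(agg.argmax())] += 1
    return counts


def certify(counts, sigma, alpha=0.001):
    """Cohen-et-al.-style certificate from the aggregated counts."""
    c_hat = int(counts.argmax())
    k, n = counts[c_hat], counts.sum()
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_a_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_a_lower <= 0.5:
        return None, 0.0  # abstain
    return c_hat, sigma * norm.ppf(p_a_lower)
```

Calling `smooth_reduce_counts` followed by `certify` on a single image reproduces the certify-or-abstain behaviour described above; under these assumptions, the max reduction favours the single most confident patch, while the mean reduction pools evidence across all patches.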
Related papers
- Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness [21.394217131341932]
We introduce a novel certifying adapters framework (CAF) that enables and enhances the certification of adversarial robustness.
CAF achieves improved certified accuracies when compared to methods based on random or denoised smoothing.
An ensemble of adapters enables a single pre-trained feature extractor to defend against a range of noise perturbation scales.
arXiv Detail & Related papers (2024-05-25T03:18:52Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning means proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity [27.04033198073254]
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
We show how today's "optimal" certificates can be improved by exploiting both the transitivity of certifications, and the geometry of the input space.
Our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius.
arXiv Detail & Related papers (2022-10-12T10:42:21Z)
- Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using $0^{th}$-order and $1^{st}$-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z)
- Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods do not give us any information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
arXiv Detail & Related papers (2020-09-17T04:37:26Z)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework [60.981406394238434]
We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks.
Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
arXiv Detail & Related papers (2020-02-21T07:52:47Z)