Data Dependent Randomized Smoothing
- URL: http://arxiv.org/abs/2012.04351v1
- Date: Tue, 8 Dec 2020 10:53:11 GMT
- Title: Data Dependent Randomized Smoothing
- Authors: Motasem Alfarra, Adel Bibi, Philip H. S. Torr, and Bernard Ghanem
- Abstract summary: We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet.
- Score: 127.34833801660233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is a recent technique that achieves state-of-the-art
performance in training certifiably robust deep neural networks. While the
smoothing family of distributions is often connected to the choice of the norm
used for certification, the parameters of the distributions are always set as
global hyperparameters independent of the input data on which a network is
certified. In this work, we revisit Gaussian randomized smoothing where we show
that the variance of the Gaussian distribution can be optimized at each input
so as to maximize the certification radius for the construction of the smoothed
classifier. This new approach is generic, parameter-free, and easy to
implement. In fact, we show that our data dependent framework can be seamlessly
incorporated into 3 randomized smoothing approaches, leading to consistently
improved certified accuracy. When this framework is used in the training
routine of these approaches followed by a data dependent certification, we get
9% and 6% improvement over the certified accuracy of the strongest baseline for
a radius of 0.5 on CIFAR10 and ImageNet, respectively.
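The core idea above — choosing the Gaussian variance per input to maximize the certified radius — can be illustrated with a minimal sketch. It uses the standard l2 certified-radius formula for Gaussian smoothing, R = (σ/2)(Φ⁻¹(p_A) − Φ⁻¹(p_B)), with a simple grid search over σ; `estimate_probs` is a hypothetical stand-in for Monte Carlo estimation of the smoothed classifier's top-two class probabilities, and the paper's actual optimization procedure may differ.

```python
from statistics import NormalDist
from typing import Callable, Sequence, Tuple

# Inverse CDF of the standard normal distribution (Phi^-1).
phi_inv = NormalDist().inv_cdf

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """l2 certified radius of a Gaussian-smoothed classifier, given the
    top-class probability p_a and runner-up probability p_b."""
    return (sigma / 2.0) * (phi_inv(p_a) - phi_inv(p_b))

def best_sigma(
    estimate_probs: Callable[[float], Tuple[float, float]],
    sigmas: Sequence[float],
) -> Tuple[float, float]:
    """Grid-search the smoothing variance that maximizes the radius.

    `estimate_probs(sigma)` is a hypothetical helper that would return
    (p_top, p_runner_up) estimated under N(0, sigma^2 I) input noise.
    Returns the best (sigma, radius) pair found.
    """
    best_s, best_r = sigmas[0], float("-inf")
    for s in sigmas:
        p_a, p_b = estimate_probs(s)
        r = certified_radius(p_a, p_b, s)
        if r > best_r:
            best_s, best_r = s, r
    return best_s, best_r
```

With fixed probabilities the radius grows linearly in σ, so the search picks the largest value; in practice the probabilities themselves degrade as σ grows, which is the trade-off the per-input optimization exploits.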
Related papers
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - Towards Large Certified Radius in Randomized Smoothing using Quasiconcave Optimization [3.5133481941064164]
In this work, we show that by exploiting a quasi fixed problem structure, we can find the optimal certified radii for most data points with slight computational overhead.
This leads to an efficient and effective input-specific randomized smoothing algorithm.
arXiv Detail & Related papers (2023-02-01T03:25:43Z) - Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
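The patch-aggregation step described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: `classify` is a hypothetical base classifier returning per-class logits, and Smooth-Reduce's actual certification machinery is omitted.

```python
from typing import Callable, List, Sequence

def aggregate_patch_logits(
    image: List[List[float]],
    classify: Callable[[List[List[float]]], Sequence[float]],
    patch: int,
    stride: int,
) -> List[float]:
    """Extract overlapping patches from a 2-D image, classify each one,
    and average the predicted logits across patches."""
    h, w = len(image), len(image[0])
    totals: List[float] = []
    count = 0
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            # Crop a patch-by-patch window at (top, left).
            window = [row[left:left + patch] for row in image[top:top + patch]]
            logits = classify(window)
            if not totals:
                totals = [0.0] * len(logits)
            for i, v in enumerate(logits):
                totals[i] += v
            count += 1
    return [t / count for t in totals]
```

Averaging logits over many overlapping views is what lets the method certify a larger radius than a single-view smoothed prediction.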
arXiv Detail & Related papers (2022-05-12T15:26:20Z) - Simpler Certified Radius Maximization by Propagating Covariances [39.851641822878996]
We present an algorithm for maximizing the certified radius on datasets including CIFAR-10, ImageNet, and Places365 with networks of moderate depth, at a small compromise in overall accuracy.
arXiv Detail & Related papers (2021-04-13T01:38:36Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and Accuracy [9.50143683501477]
Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy.
arXiv Detail & Related papers (2021-03-07T19:46:07Z) - Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z) - Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods do not give us any information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
arXiv Detail & Related papers (2020-09-17T04:37:26Z) - Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties [12.068153197381575]
We propose a novel variational family that allows for retaining covariances between latent processes while achieving fast convergence.
We provide an efficient implementation of our new approach and apply it to several benchmark datasets.
It yields excellent results and strikes a better balance between accuracy and calibrated uncertainty estimates than its state-of-the-art alternatives.
arXiv Detail & Related papers (2020-05-22T11:10:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.