Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency
- URL: http://arxiv.org/abs/2404.17371v1
- Date: Fri, 26 Apr 2024 12:43:19 GMT
- Title: Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency
- Authors: Emmanouil Seferis, Stefanos Kollias, Chih-Hong Cheng
- Abstract summary: This work demonstrates that reducing the number of samples by one or two orders of magnitude can still enable the computation of a slightly smaller robustness radius.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
- Score: 6.199300239433395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing (RS) has successfully been used to improve the robustness of predictions for deep neural networks (DNNs) by adding random noise to create multiple variations of an input and then taking the consensus prediction. To understand whether an RS-enabled DNN is effective in the sampled input domains, it is mandatory to sample data points within the operational design domain, acquire the point-wise certificate of the robustness radius, and compare it with pre-defined acceptance criteria. Consequently, it is crucial that a point-wise robustness certificate can be obtained relatively cost-effectively for any given data point. This work demonstrates that reducing the number of samples by one or two orders of magnitude still enables the computation of a slightly smaller robustness radius (commonly ~20% radius reduction) with the same confidence. We provide the mathematical foundation explaining the phenomenon, and experimentally show promising results on the standard CIFAR-10 and ImageNet datasets.
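To make the sample-efficiency claim concrete, here is a minimal sketch (not the authors' exact procedure) of the standard point-wise randomized-smoothing certificate in the style of Cohen et al. (2019), which this line of work builds on: the certified $\ell_2$ radius is $R = \sigma\,\Phi^{-1}(\underline{p_A})$, where $\underline{p_A}$ is a one-sided Clopper-Pearson lower confidence bound on the top-class probability estimated from $n$ noisy samples. The vote share, noise level $\sigma$, and confidence level below are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch of a point-wise randomized-smoothing certificate
# (standard Cohen et al.-style bound; not this paper's exact derivation).
from scipy.stats import beta, norm

def certified_radius(n_top: int, n_total: int, sigma: float, alpha: float = 0.001) -> float:
    """Certified L2 radius from n_total noisy samples, n_top of which
    voted for the top class, at confidence level 1 - alpha."""
    if n_top == 0:
        return 0.0
    # One-sided Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = beta.ppf(alpha, n_top, n_total - n_top + 1)
    if p_lower <= 0.5:
        return 0.0  # cannot certify any radius; the smoothed classifier abstains
    return sigma * norm.ppf(p_lower)

# Hypothetical illustration: keep the empirical top-class vote share at 99%
# and shrink the sample budget by two orders of magnitude.
sigma = 0.5
for n in (100_000, 10_000, 1_000):
    print(n, round(certified_radius(int(0.99 * n), n, sigma), 4))
```

Because the Clopper-Pearson bound loosens only at roughly a $1/\sqrt{n}$ rate, the certified radius degrades moderately rather than proportionally as the sample budget shrinks, which is the effect the abstract quantifies as a roughly 20% radius reduction for one to two orders of magnitude fewer samples.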
Related papers
- Certified Adversarial Robustness via Partition-based Randomized Smoothing [9.054540533394926]
We propose the Pixel Partitioning-based Randomized Smoothing (PPRS) methodology to boost the neural net's confidence score.
We demonstrate that the proposed PPRS algorithm improves the visibility of the images under additive Gaussian noise.
arXiv Detail & Related papers (2024-09-20T14:41:47Z) - Approximate Thompson Sampling via Epistemic Neural Networks [26.872304174606278]
Epistemic neural networks (ENNs) are designed to produce accurate joint predictive distributions.
We show that ENNs serve this purpose well and illustrate how the quality of joint predictive distributions drives performance.
arXiv Detail & Related papers (2023-02-18T01:58:15Z) - Towards Large Certified Radius in Randomized Smoothing using Quasiconcave Optimization [3.5133481941064164]
In this work, we show that by exploiting a quasiconcave problem structure, we can find the optimal certified radii for most data points with slight computational overhead.
This leads to an efficient and effective input-specific randomized smoothing algorithm.
arXiv Detail & Related papers (2023-02-01T03:25:43Z) - Double Sampling Randomized Smoothing [19.85592163703077]
We propose a Double Sampling Randomized Smoothing framework.
It exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
We show that DSRS consistently certifies larger robust radii than existing baselines under different settings.
arXiv Detail & Related papers (2022-06-16T04:34:28Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet.
arXiv Detail & Related papers (2020-12-08T10:53:11Z) - Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z) - Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy [11.470070927586017]
We propose the SPRT-TANDEM, a deep neural network-based algorithm that overcomes two key obstacles of the classical sequential probability ratio test (SPRT).
In tests on one original and two public video databases, the SPRT-TANDEM achieves statistically significantly better classification accuracy than other baselines.
arXiv Detail & Related papers (2020-06-10T01:05:00Z) - RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.