Deterministic Certification to Adversarial Attacks via Bernstein
Polynomial Approximation
- URL: http://arxiv.org/abs/2011.14085v1
- Date: Sat, 28 Nov 2020 08:27:42 GMT
- Title: Deterministic Certification to Adversarial Attacks via Bernstein
Polynomial Approximation
- Authors: Ching-Chia Kao, Jhe-Bang Ko, Chun-Shien Lu
- Abstract summary: Randomized smoothing has established state-of-the-art provable robustness against $\ell_2$ norm adversarial attacks with high probability.
We pose the question: "Is it possible to construct a smoothed classifier without randomization while maintaining natural accuracy?"
Our method provides a deterministic algorithm for decision boundary smoothing.
We also introduce a distinctive approach to norm-independent certified robustness via numerical solutions of nonlinear systems of equations.
- Score: 5.392822954974537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing has established state-of-the-art provable robustness
against $\ell_2$ norm adversarial attacks with high probability. However, the
introduced Gaussian data augmentation causes a severe decrease in natural
accuracy. We pose the question: "Is it possible to construct a smoothed
classifier without randomization while maintaining natural accuracy?" We find
that the answer is yes. We study how to transform any classifier into a
certified robust classifier based on a popular and elegant mathematical tool, the
Bernstein polynomial. Our method provides a deterministic algorithm for
decision boundary smoothing. We also introduce a distinctive approach to
norm-independent certified robustness via numerical solutions of nonlinear
systems of equations. Theoretical analyses and experimental results indicate
that our method is promising for classifier smoothing and robustness
certification.
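As a concrete illustration of the two ideas above, the following is a minimal
1-D sketch in Python (not the authors' algorithm; the hard classifier, the
degree n = 60, and the test point x0 are illustrative choices). It smooths a
discontinuous decision function deterministically via its Bernstein polynomial
and then certifies a point by numerically solving for the nearest root of the
smoothed score, i.e., the smoothed decision boundary; in higher dimensions this
root-finding step becomes the nonlinear system of equations the abstract
mentions.

```python
# Deterministic 1-D smoothing via the Bernstein polynomial
#   B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k),
# followed by a numerical certificate: the distance from a test point
# to the nearest root of the smoothed score (the decision boundary).
import numpy as np
from scipy.optimize import brentq
from scipy.special import comb

def bernstein_smooth(f, n):
    """Return B_n(f), the degree-n Bernstein polynomial of f on [0, 1]."""
    ks = np.arange(n + 1)
    vals = np.array([f(k / n) for k in ks])  # f sampled on a uniform grid
    coeffs = comb(n, ks)                     # binomial coefficients C(n, k)
    def smoothed(x):
        x = np.asarray(x, dtype=float)
        basis = coeffs * x[..., None] ** ks * (1.0 - x[..., None]) ** (n - ks)
        return basis @ vals
    return smoothed

# Hard (discontinuous) binary decision function: class 1 iff x > 0.3.
f_hard = lambda x: float(x > 0.3)
g = bernstein_smooth(f_hard, n=60)           # smooth deterministic surrogate

# Certify x0: the smoothed boundary is the root of g(x) - 1/2, and the
# distance from x0 to that root is a certified radius in 1-D.
x0 = 0.7
boundary = brentq(lambda t: g(np.array([t]))[0] - 0.5, 0.0, 1.0)
print(f"score at x0: {g(np.array([x0]))[0]:.3f}, "
      f"boundary near x = {boundary:.3f}, radius {abs(x0 - boundary):.3f}")
```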
Related papers
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
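For context, the kind of certificate this line of work refines can be stated in
a few lines: if every class score of a (smoothed) model is L-Lipschitz in the
$\ell_2$ norm, a prediction whose top two scores differ by a margin m cannot
change within radius m / (2L). The snippet below is a generic sketch of that
Lipschitz-margin bound, not this paper's procedure; the toy logits and L = 1
are illustrative.

```python
# Generic Lipschitz-margin certificate: an L-Lipschitz scoring function
# with top-two margin m keeps its prediction within l2 radius m / (2L).
import numpy as np

def lipschitz_margin_radius(scores, lipschitz_const):
    runner_up, top = np.sort(scores)[-2:]    # two largest class scores
    return (top - runner_up) / (2.0 * lipschitz_const)

scores = np.array([0.1, 2.3, 0.8])           # toy logits
print(lipschitz_margin_radius(scores, lipschitz_const=1.0))  # 0.75
```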
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - Confidence-aware Training of Smoothed Classifiers for Certified
Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
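The proxy itself is cheap to compute. A minimal sketch (sigma and the sample
count are illustrative, and `classify` stands in for any base classifier):

```python
# "Accuracy under Gaussian noise": the fraction of noisy copies of x
# that the base classifier still assigns to the true label.
import numpy as np

def accuracy_under_noise(classify, x, label, sigma=0.25, n_samples=100,
                         rng=None):
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([classify(x + eps) for eps in noise])
    return (preds == label).mean()
```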
arXiv Detail & Related papers (2022-12-18T03:57:12Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the noise components to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods, however, provide no information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
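A minimal sketch of the machinery such certificates build on (a Clopper-Pearson
lower bound combined with the standard Gaussian smoothing radius, not this
paper's exact confidence certificate; sigma, alpha, and the vote counts are
illustrative):

```python
# Convert Monte Carlo votes for the top class into a certified l2
# radius: lower-bound the smoothed top-class probability p_A with a
# Clopper-Pearson interval, then apply radius = sigma * Phi^{-1}(p_A).
from scipy.stats import binomtest, norm

def certify(n_top, n_total, sigma=0.25, alpha=0.001):
    ci = binomtest(n_top, n_total).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact")
    p_lower = ci.low                         # one-sided (1 - alpha) bound
    return sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0  # 0 = abstain

print(certify(n_top=990, n_total=1000))      # radius for a confident vote
```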
arXiv Detail & Related papers (2020-09-17T04:37:26Z) - Extensions and limitations of randomized smoothing for robustness
guarantees [13.37805637358556]
We study how the choice of divergence between smoothing measures affects the final robustness guarantee.
We develop a method to certify robustness against any $\ell_p$ ($p \in \mathbb{N}_{>0}$) minimized adversarial perturbation.
arXiv Detail & Related papers (2020-06-07T17:22:32Z) - Consistency Regularization for Certified Robustness of Smoothed
Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
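A minimal sketch of a KL-based consistency regularizer in that spirit (a
generic variant, not necessarily the paper's exact loss; the noise scale and
number of noisy copies are illustrative):

```python
# Penalize disagreement among the model's predictions on Gaussian-noised
# copies of each input: mean KL divergence from the average prediction
# to each per-copy prediction.
import torch
import torch.nn.functional as F

def consistency_loss(model, x, sigma=0.25, n_copies=2):
    noisy = [x + sigma * torch.randn_like(x) for _ in range(n_copies)]
    log_probs = [F.log_softmax(model(z), dim=1) for z in noisy]
    avg = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    kls = [F.kl_div(lp, avg, reduction="batchmean") for lp in log_probs]
    return torch.stack(kls).mean()
```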
arXiv Detail & Related papers (2020-06-07T06:57:43Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)