Certified Robustness via Randomized Smoothing over Multiplicative
Parameters
- URL: http://arxiv.org/abs/2106.14432v1
- Date: Mon, 28 Jun 2021 07:35:15 GMT
- Title: Certified Robustness via Randomized Smoothing over Multiplicative
Parameters
- Authors: Nikita Muravev, Aleksandr Petiushko
- Abstract summary: We construct certifiably robust classifiers with respect to a gamma-correction perturbation.
To the best of our knowledge, this is the first work concerning certified robustness against the multiplicative gamma-correction transformation.
- Score: 78.42152902652215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel approach to randomized smoothing over multiplicative
parameters. Using this method we construct certifiably robust classifiers with
respect to gamma-correction perturbations and compare the results with
classifiers obtained via Gaussian smoothing. To the best of our knowledge,
this is the first work concerning certified robustness against the
multiplicative gamma-correction transformation.
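The idea of smoothing over a multiplicative parameter can be illustrated with a minimal Monte Carlo sketch. This is an assumption-laden toy, not the paper's implementation: the sampling distribution (log-normal gamma), the noise scale `gamma_sigma`, and the helper names are all hypothetical, and the certification step itself is omitted; only the majority-vote smoothed prediction is shown.

```python
import numpy as np

def smoothed_predict(classifier, image, n_samples=100, gamma_sigma=0.3, seed=0):
    """Monte Carlo estimate of a classifier smoothed over random
    gamma corrections x -> x**gamma (a multiplicative perturbation:
    gamma acts multiplicatively on log-pixel values)."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        # Sample gamma > 0 so that log(gamma) is Gaussian (toy assumption).
        gamma = np.exp(rng.normal(0.0, gamma_sigma))
        perturbed = np.clip(image, 1e-6, 1.0) ** gamma  # pixels in [0, 1]
        label = classifier(perturbed)
        votes[label] = votes.get(label, 0) + 1
    # The smoothed classifier returns the majority-vote class.
    return max(votes, key=votes.get)

# Toy usage: a "classifier" that thresholds mean brightness.
toy = lambda x: int(x.mean() > 0.5)
img = np.full((4, 4), 0.8)
smoothed_label = smoothed_predict(toy, img)
```

Sampling `log(gamma)` from a Gaussian keeps gamma positive and makes the multiplicative structure explicit; a real certificate would additionally bound how far the vote can shift under an adversarial gamma.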
Related papers
- Accelerated Smoothing: A Scalable Approach to Randomized Smoothing [4.530339602471495]
We propose a novel approach that replaces Monte Carlo sampling with the training of a surrogate neural network.
We show that our approach significantly accelerates the robust-radius certification process, yielding a nearly $600\times$ speedup.
arXiv Detail & Related papers (2024-02-12T09:07:54Z)
- The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
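Regularizing prediction consistency over noise, as the entry above describes, can be sketched as a penalty on how much the class distributions of noisy copies of an input disagree. This is a hedged illustration only: the function name, the use of the mean distribution as reference, and the weight `lam` are assumptions, not the paper's exact objective.

```python
import numpy as np

def consistency_loss(logits_list, lam=1.0):
    """Penalize divergence among predictions on noisy copies of one input.
    logits_list: list of logit vectors, one per noise-perturbed copy."""
    # Numerically stable softmax for each copy.
    probs = []
    for logits in logits_list:
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    mean_p = np.mean(probs, axis=0)
    # Average KL(mean || p_i) over copies: zero iff all copies agree.
    kl = np.mean([
        np.sum(mean_p * (np.log(mean_p + 1e-12) - np.log(p + 1e-12)))
        for p in probs
    ])
    return lam * kl

# Identical predictions incur no penalty; disagreeing ones do.
agree = consistency_loss([np.array([1.0, 0.0]), np.array([1.0, 0.0])])
disagree = consistency_loss([np.array([2.0, 0.0]), np.array([0.0, 2.0])])
```

Minimizing such a term pushes the base classifier's outputs to be stable under the smoothing noise, which is the mechanism by which the accuracy/certified-robustness trade-off is controlled.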
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.