General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
- URL: http://arxiv.org/abs/2309.16710v2
- Date: Fri, 9 Aug 2024 16:17:04 GMT
- Title: General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
- Authors: Dmitrii Korzh, Mikhail Pautov, Olga Tsymboi, Ivan Oseledets
- Abstract summary: We propose General Lipschitz (GL), a new framework to certify neural networks against composable resolvable semantic perturbations.
Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
- Score: 5.5855074442298696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing is the state-of-the-art approach to construct image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude. However, it is more complicated to construct reasonable certificates against semantic transformations (e.g., image blurring, translation, gamma correction) and their compositions. In this work, we propose \emph{General Lipschitz (GL)}, a new framework to certify neural networks against composable resolvable semantic perturbations. Within the framework, we analyze transformation-dependent Lipschitz-continuity of smoothed classifiers w.r.t. transformation parameters and derive corresponding robustness certificates. Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
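To make the underlying mechanism concrete, here is a minimal, hypothetical sketch of classical randomized smoothing (in the style of Cohen et al., 2019), on which transformation-dependent frameworks such as GL build: the smoothed classifier g(x) = argmax_c P[f(x + eps) = c] with eps ~ N(0, sigma^2 I) is certifiably robust within an l2 radius R = sigma/2 * (Phi^-1(p_A) - Phi^-1(p_B)). Function names and parameters below are illustrative, not from the paper.

```python
# Hypothetical sketch of randomized smoothing; not the GL implementation.
import random
from statistics import NormalDist


def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    # l2 radius R = sigma/2 * (Phi^-1(p_a) - Phi^-1(p_b)) within which the
    # smoothed prediction provably cannot change (p_a: top-class probability,
    # p_b: runner-up probability).
    ppf = NormalDist().inv_cdf
    return 0.5 * sigma * (ppf(p_a) - ppf(p_b))


def smooth_predict(f, x, sigma=0.25, n=1000, seed=0):
    # Monte Carlo estimate of the smoothed classifier's class frequencies:
    # draw n Gaussian perturbations of x and tally the base classifier's votes.
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    return {c: k / n for c, k in counts.items()}
```

The GL framework replaces additive noise with randomness over transformation parameters and certifies in that parameter space instead, but the vote-then-bound structure is analogous.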
Related papers
- Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation [57.10353686244835]
We address the problem of verifying neural networks against geometric transformations of the input image, including rotation, scaling, shearing, and translation.
The proposed method computes provably sound piecewise linear constraints for the pixel values using sampling and linear approximations combined with branch-and-bound Lipschitz optimisation.
We show that the proposed implementation resolves up to 32% more verification cases than existing approaches.
arXiv Detail & Related papers (2024-08-23T15:02:09Z) - Hierarchical Randomized Smoothing [94.59984692215426]
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs.
We introduce hierarchical randomized smoothing: We partially smooth objects by adding random noise only on a randomly selected subset of their entities.
We experimentally demonstrate the importance of hierarchical smoothing in image and node classification, where it yields superior robustness-accuracy trade-offs.
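The hierarchical smoothing idea above (add noise only on a randomly selected subset of an object's entities) can be sketched as follows; the function name and parameters are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of hierarchical randomized smoothing: perturb only a
# random subset of entities (e.g., pixels or nodes) instead of all of them.
import random


def hierarchical_noise(x, subset_frac=0.5, sigma=0.25, seed=0):
    # Select a random subset of indices, then add Gaussian noise only there;
    # the remaining entities are passed through unchanged.
    rng = random.Random(seed)
    k = max(1, int(subset_frac * len(x)))
    idx = set(rng.sample(range(len(x)), k))
    return [xi + rng.gauss(0.0, sigma) if i in idx else xi
            for i, xi in enumerate(x)]
```

Smoothing over both the subset choice and the noise is what yields the robustness-accuracy trade-offs the paper reports.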
arXiv Detail & Related papers (2023-10-24T22:24:44Z) - The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unstable predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing [40.38555458216436]
We propose a unified theoretical framework for certifying robustness against general semantic transformations.
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
arXiv Detail & Related papers (2022-06-09T07:12:17Z) - SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z) - Orthogonalizing Convolutional Layers with the Cayley Transform [83.73855414030646]
We propose and evaluate an alternative approach to parameterize convolutional layers that are constrained to be orthogonal.
We show that our method indeed preserves orthogonality to a high degree even for large convolutions.
arXiv Detail & Related papers (2021-04-14T23:54:55Z) - Lipschitz Regularized CycleGAN for Improving Semantic Robustness in Unpaired Image-to-image Translation [19.083671868521918]
For unpaired image-to-image translation tasks, GAN-based approaches are susceptible to semantic flipping.
We propose a novel approach, Lipschitz regularized CycleGAN, for improving semantic robustness.
We evaluate our approach on multiple common datasets and compare with several existing GAN-based methods.
arXiv Detail & Related papers (2020-12-09T09:28:53Z) - Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
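The consistency idea above (regularize the prediction consistency over noise) can be sketched as a penalty on disagreement between predictions on different noisy copies of the same input. The squared-distance form below is an illustrative choice, not necessarily the exact loss used in the paper.

```python
# Hypothetical consistency penalty: variance of per-noise-sample class
# probability vectors around their mean. Zero iff all predictions agree.
def consistency_penalty(probs_list):
    # probs_list: class-probability vectors, one per noisy copy of the input.
    n = len(probs_list)
    num_classes = len(probs_list[0])
    mean = [sum(p[c] for p in probs_list) / n for c in range(num_classes)]
    return sum(sum((p[c] - mean[c]) ** 2 for c in range(num_classes))
               for p in probs_list) / n
```

Adding such a term to the training loss pushes the base classifier toward stable predictions under noise, which tightens the smoothed classifier's certificates.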
arXiv Detail & Related papers (2020-06-07T06:57:43Z) - Certified Defense to Image Transformations via Randomized Smoothing [13.482057387436342]
We extend randomized smoothing to cover transformations (e.g., rotations, translations) and certify in the parameter space.
This is particularly challenging as interpolation and rounding effects mean that image transformations do not compose.
We show how individual certificates can be obtained via either statistical error bounds or efficient inverse computation of the image transformation.
arXiv Detail & Related papers (2020-02-27T22:02:32Z) - TSS: Transformation-Specific Smoothing for Robustness Certification [37.87602431929278]
Motivated adversaries can mislead machine learning systems by perturbing test data using semantic transformations.
We provide TSS -- a unified framework for certifying ML robustness against general adversarial semantic transformations.
We show TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2020-02-27T19:19:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.