GSmooth: Certified Robustness against Semantic Transformations via
Generalized Randomized Smoothing
- URL: http://arxiv.org/abs/2206.04310v1
- Date: Thu, 9 Jun 2022 07:12:17 GMT
- Title: GSmooth: Certified Robustness against Semantic Transformations via
Generalized Randomized Smoothing
- Authors: Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian
Song
- Abstract summary: We propose a unified theoretical framework for certifying robustness against general semantic transformations.
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
- Score: 40.38555458216436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Certified defenses such as randomized smoothing have shown promise towards
building reliable machine learning systems against $\ell_p$-norm bounded
attacks. However, existing methods are insufficient or unable to provably
defend against semantic transformations, especially those without closed-form
expressions (such as defocus blur and pixelate), which are more common in
practice and often unrestricted. To fill this gap, we propose generalized
randomized smoothing (GSmooth), a unified theoretical framework for certifying
robustness against general semantic transformations via a novel dimension
augmentation strategy. Under the GSmooth framework, we present a scalable
algorithm that uses a surrogate image-to-image network to approximate the
complex transformation. The surrogate model provides a powerful tool for
studying the properties of semantic transformations and certifying robustness.
Experimental results on several datasets demonstrate the effectiveness of our
approach for robustness certification against multiple kinds of semantic
transformations and corruptions, which is not achievable by the alternative
baselines.
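To make the pipeline concrete, here is a minimal Monte Carlo sketch of the idea (not the authors' code): a surrogate network g stands in for the semantic transformation, and noise is injected both in the transformation parameter (the dimension-augmentation step) and in pixel space before voting. The function names, the toy classifier, and the brightness-like surrogate are all illustrative assumptions.

```python
import numpy as np

def gsmooth_predict(f, g, x, sigma_beta=0.5, sigma_pix=0.25,
                    n_samples=1000, num_classes=10, seed=0):
    """Monte Carlo sketch of generalized randomized smoothing.

    f: base classifier, callable(image) -> class index.
    g: surrogate image-to-image network approximating the semantic
       transformation, callable(image, beta) -> image.
    Noise enters both through the transformation parameter beta (the
    dimension-augmentation idea) and through pixel space.
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        beta = rng.normal(0.0, sigma_beta)       # noisy transformation parameter
        x_t = g(x, beta)                         # surrogate applies the transformation
        x_noisy = x_t + rng.normal(0.0, sigma_pix, size=x.shape)
        votes[f(x_noisy)] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples           # predicted class, vote frequency

# Toy stand-ins: a random linear "classifier" and a brightness-like surrogate.
rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))
f = lambda img: int((W @ img.ravel()).argmax())
g = lambda img, beta: img * (1.0 + 0.1 * beta)
x = rng.normal(size=(8, 8))
print(gsmooth_predict(f, g, x))
```

A full certification run would additionally convert the top-class vote frequency into a high-confidence lower bound and then into a certified range of transformation parameters; that step is omitted here.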
Related papers
- Hierarchical Randomized Smoothing [94.59984692215426]
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs.
We introduce hierarchical randomized smoothing: We partially smooth objects by adding random noise only on a randomly selected subset of their entities.
We experimentally demonstrate the importance of hierarchical smoothing in image and node classification, where it yields superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-10-24T22:24:44Z)
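A minimal sketch of the partial-smoothing step described above, assuming the entities are pixels and the noise is Gaussian; the function name and defaults are illustrative, not the authors' API.

```python
import numpy as np

def hierarchical_noise(x, k, sigma, seed=0):
    """Add Gaussian noise only to a random subset of k entities (here: pixels).

    Two-level sampling: first draw WHICH entities to perturb,
    then draw the noise itself.
    """
    rng = np.random.default_rng(seed)
    flat = x.ravel().copy()
    idx = rng.choice(flat.size, size=k, replace=False)  # level 1: pick entities
    flat[idx] += rng.normal(0.0, sigma, size=k)         # level 2: smooth them
    return flat.reshape(x.shape)

x = np.zeros((4, 4))
print(hierarchical_noise(x, k=3, sigma=1.0))
```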
- General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing [5.5855074442298696]
We propose General Lipschitz (GL), a new framework to certify neural networks against composable resolvable semantic perturbations.
Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
arXiv Detail & Related papers (2023-08-17T14:39:24Z)
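The summary above is terse, but the sampling loop behind transformation-dependent smoothing is easy to sketch: draw the parameter of a resolvable transformation at random and vote over the transformed copies. Brightness is used as a stand-in transformation, and all names are assumptions rather than the GL implementation.

```python
import numpy as np

def param_smoothed_predict(f, transform, x, sigma=0.3,
                           n_samples=500, num_classes=10, seed=0):
    """Vote over copies of x transformed with randomly drawn parameters.

    transform: callable(x, b) -> x', assumed resolvable, i.e. composing two
    instances only shifts the parameter:
    transform(transform(x, a), b) == transform(x, a + b).
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(num_classes, dtype=int)
    for b in rng.normal(0.0, sigma, size=n_samples):
        votes[f(transform(x, b))] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 64))
f = lambda img: int((W @ img.ravel()).argmax())
brightness = lambda img, b: img + b   # resolvable: (x + a) + b == x + (a + b)
print(param_smoothed_predict(f, brightness, rng.normal(size=(8, 8))))
```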
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks [39.51297217854375]
We propose Text-CRS, a certified robustness framework for natural language processing (NLP) based on randomized smoothing.
We show that Text-CRS can address all four different word-level adversarial operations and achieve a significant accuracy improvement.
We also provide the first benchmark on certified accuracy and radius of four word-level operations, besides outperforming the state-of-the-art certification against synonym substitution attacks.
arXiv Detail & Related papers (2023-07-31T13:08:16Z)
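As a hedged illustration of word-level smoothing against synonym-substitution attacks (one of the four operations mentioned), the sketch below resamples words from synonym sets and takes a majority vote; the toy classifier and synonym table are made up for the example.

```python
import random
from collections import Counter

def synonym_smoothed_predict(f, words, synonyms, p=0.5, n_samples=200, seed=0):
    """Majority vote over randomized synonym substitutions.

    f: text classifier, callable(list_of_words) -> label.
    synonyms: dict mapping a word to a list of interchangeable words.
    Each word is independently resampled from its synonym set with
    probability p.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        sample = [rng.choice(synonyms.get(w, [w])) if rng.random() < p else w
                  for w in words]
        votes[f(sample)] += 1
    label, count = votes.most_common(1)[0]
    return label, count / n_samples

# Toy stand-ins.
synonyms = {"good": ["good", "great", "fine"], "movie": ["movie", "film"]}
f = lambda ws: "pos" if any(w in ("good", "great", "fine") for w in ws) else "neg"
print(synonym_smoothed_predict(f, ["good", "movie"], synonyms))
```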
- TPC: Transformation-Specific Smoothing for Point Cloud Models [9.289813586197882]
We propose a transformation-specific smoothing framework, TPC, which provides robustness guarantees for point cloud models against semantic transformation attacks.
Experiments on several common 3D transformations show that TPC significantly outperforms the state of the art.
arXiv Detail & Related papers (2022-01-30T05:41:50Z)
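A minimal sketch of transformation-specific smoothing for one concrete 3D transformation, rotation about the z-axis: sample angles, rotate the cloud, and vote. The toy classifier is an assumption; TPC's actual certificates cover a broader family of transformations.

```python
import numpy as np

def rotz(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_smoothed_predict(f, cloud, sigma=0.1, n_samples=500,
                              num_classes=10, seed=0):
    """Vote over randomly rotated copies of a point cloud (N x 3 array)."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(num_classes, dtype=int)
    for theta in rng.normal(0.0, sigma, size=n_samples):
        votes[f(cloud @ rotz(theta).T)] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples

rng = np.random.default_rng(3)
W = rng.normal(size=(10, 3))
f = lambda pts: int((W @ pts.mean(axis=0)).argmax())  # toy point-cloud classifier
print(rotation_smoothed_predict(f, rng.normal(size=(128, 3))))
```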

We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
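Roughly, SmoothMix interpolates an input with an over-confident sample found along the adversarial direction of the smoothed classifier, and pulls the label toward uniform confidence accordingly. The sketch below substitutes a fixed noisy copy for the adversarial endpoint so it stays self-contained; treat it as a caricature of one training pair, not the published procedure.

```python
import numpy as np

def smoothmix_pair(x_clean, x_far, y_onehot, lam):
    """Build one mixed training pair (input and soft label).

    The far endpoint x_far is assumed to be an over-confident,
    near off-class sample; its label contribution is uniform confidence.
    """
    x_mix = lam * x_clean + (1.0 - lam) * x_far
    uniform = np.full_like(y_onehot, 1.0 / y_onehot.size)
    y_mix = lam * y_onehot + (1.0 - lam) * uniform   # pull label toward uniform
    return x_mix, y_mix

rng = np.random.default_rng(4)
x = rng.normal(size=8)
x_far = x + rng.normal(0.0, 1.0, size=8)  # stand-in for the adversarial endpoint
y = np.array([1.0, 0.0, 0.0])
print(smoothmix_pair(x, x_far, y, lam=0.7))
```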
- Orthogonalizing Convolutional Layers with the Cayley Transform [83.73855414030646]
We propose and evaluate an alternative approach to parameterize convolutional layers that are constrained to be orthogonal.
We show that our method indeed preserves orthogonality to a high degree even for large convolutions.
arXiv Detail & Related papers (2021-04-14T23:54:55Z)
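The Cayley transform itself is compact enough to show directly: any skew-symmetric matrix A maps to an orthogonal matrix Q = (I - A)(I + A)^{-1}. The paper applies this to convolutions in the Fourier domain; the dense-matrix sketch below only checks the algebraic core.

```python
import numpy as np

def cayley(A):
    """Map a skew-symmetric matrix A to an orthogonal matrix.

    Q = (I - A)(I + A)^{-1}; since A is skew-symmetric, I + A is
    invertible and Q satisfies Q.T @ Q = I.
    """
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(5)
M = rng.normal(size=(6, 6))
A = (M - M.T) / 2.0                       # skew-symmetrize a free parameter
Q = cayley(A)
print(np.allclose(Q.T @ Q, np.eye(6)))    # True: Q is orthogonal
```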
- Consistent Non-Parametric Methods for Adaptive Robustness [26.016647703500887]
A major drawback of the standard robust learning framework is the imposition of an artificial robustness radius $r$ that applies to all inputs.
We propose a new framework for adaptive robustness, called neighborhood preserving robustness.
arXiv Detail & Related papers (2021-02-18T00:44:07Z)
- TSS: Transformation-Specific Smoothing for Robustness Certification [37.87602431929278]
Motivated adversaries can mislead machine learning systems by perturbing test data using semantic transformations.
We provide TSS -- a unified framework for certifying ML robustness against general adversarial semantic transformations.
We show TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2020-02-27T19:19:32Z)
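Frameworks in this family ultimately reduce Monte Carlo votes to a certified radius. Below is a sketch of that last step, using a crude normal-approximation lower bound on the top-class probability (a Clopper-Pearson bound would be used in practice) and the standard radius formula R = sigma * Phi^{-1}(pA); the numbers are illustrative.

```python
from statistics import NormalDist

def certified_radius(n_top, n_total, sigma, alpha=0.001):
    """Certified radius in the smoothed space from Monte Carlo vote counts."""
    p_hat = n_top / n_total
    z = NormalDist().inv_cdf(1.0 - alpha)
    # Crude one-sided lower confidence bound on the top-class probability.
    p_lower = p_hat - z * (p_hat * (1.0 - p_hat) / n_total) ** 0.5
    p_lower = min(p_lower, 1.0 - 1e-9)   # keep inv_cdf finite
    if p_lower <= 0.5:
        return 0.0                       # abstain: no certificate
    return sigma * NormalDist().inv_cdf(p_lower)

print(certified_radius(n_top=990, n_total=1000, sigma=0.5))
```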
- Robustness Verification for Transformers [165.25112192811764]
We develop the first robustness verification algorithm for Transformers.
The certified robustness bounds computed by our method are significantly tighter than those obtained by naive Interval Bound Propagation (IBP).
These bounds also shed light on interpreting Transformers as they consistently reflect the importance of different words in sentiment analysis.
arXiv Detail & Related papers (2020-02-16T17:16:31Z)
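The naive Interval Bound Propagation baseline mentioned above is easy to state for a feed-forward layer; the hard, paper-specific part is propagating bounds through self-attention, which is not shown. A sketch of IBP through one affine layer plus ReLU:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through y = W x + b.

    Splitting W into positive and negative parts gives the tightest
    elementwise bounds for an axis-aligned input box.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(6)
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
x, eps = rng.normal(size=4), 0.1
lo, hi = ibp_affine(x - eps, x + eps, W1, b1)
lo, hi = ibp_relu(lo, hi)
print(lo, hi)   # guaranteed bounds on the hidden activations
```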
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
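A hedged sketch of the smoothing-over-labels idea: train many cheap classifiers on independently flipped copies of the (binary) labels and take a majority vote, whose margin is what a pointwise certificate would be derived from. The nearest-centroid learner and all parameters are stand-ins, not the paper's construction.

```python
import numpy as np

def label_smoothed_predict(X, y, x_test, flip_p=0.1, n_models=101, seed=0):
    """Majority vote over classifiers trained on randomly flipped labels.

    Smooths over the *training labels* (binary here) rather than the test
    input: each draw flips every label independently with probability
    flip_p, fits a nearest-centroid classifier, and votes.
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n_models):
        y_noisy = y ^ (rng.random(y.size) < flip_p)          # flip labels
        centroids = [X[y_noisy == c].mean(axis=0) for c in (0, 1)]
        d = [np.linalg.norm(x_test - c) for c in centroids]
        votes[int(np.argmin(d))] += 1
    return int(votes.argmax()), votes.max() / n_models

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(label_smoothed_predict(X, y, np.array([2.5, 2.5])))
```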