Certified Defense to Image Transformations via Randomized Smoothing
- URL: http://arxiv.org/abs/2002.12463v4
- Date: Wed, 25 Aug 2021 07:23:36 GMT
- Title: Certified Defense to Image Transformations via Randomized Smoothing
- Authors: Marc Fischer, Maximilian Baader, Martin Vechev
- Abstract summary: We extend randomized smoothing to cover transformations (e.g., rotations, translations) and certify in the parameter space.
This is particularly challenging as interpolation and rounding effects mean that image transformations do not compose.
We show how individual certificates can be obtained via either statistical error bounds or efficient inverse computation of the image transformation.
- Score: 13.482057387436342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We extend randomized smoothing to cover parameterized transformations (e.g.,
rotations, translations) and certify robustness in the parameter space (e.g.,
rotation angle). This is particularly challenging as interpolation and rounding
effects mean that image transformations do not compose, in turn preventing
direct certification of the perturbed image (unlike certification with $\ell^p$
norms). We address this challenge by introducing three different kinds of
defenses, each with a different guarantee (heuristic, distributional and
individual) stemming from the method used to bound the interpolation error.
Importantly, we show how individual certificates can be obtained via either
statistical error bounds or efficient online inverse computation of the image
transformation. We provide an implementation of all methods at
https://github.com/eth-sri/transformation-smoothing.
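The core idea of smoothing in parameter space can be sketched as follows. This is a minimal toy illustration only, not the authors' method: the function names (`smoothed_predict`, `toy_f`) are hypothetical, the confidence bound is a crude stand-in for a proper binomial (e.g. Clopper-Pearson) lower bound, and the interpolation-error bounds that are the paper's central contribution are ignored entirely. Gaussian noise is added to the rotation angle (the transformation parameter), not to the pixels:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.stats import norm

def smoothed_predict(f, x, sigma, n=200, seed=0):
    """Monte-Carlo estimate of a classifier smoothed over the rotation angle.

    Noise is drawn in parameter space (degrees), not pixel space.
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        delta = rng.normal(0.0, sigma)                   # angle perturbation
        c = f(rotate(x, delta, reshape=False, order=1))  # classify rotated image
        counts[c] = counts.get(c, 0) + 1
    top = max(counts, key=counts.get)
    # crude stand-in for a proper lower confidence bound on the top-class probability
    p_lower = min(counts[top] / n, 1.0 - 1.0 / n)
    # standard randomized-smoothing radius, here in degrees of rotation
    radius = sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0
    return top, radius

# toy classifier: class 0 if the top half of the image is brighter than the bottom
def toy_f(img):
    h = img.shape[0] // 2
    return 0 if img[:h].mean() > img[h:].mean() else 1

x = np.zeros((8, 8))
x[:4] = 1.0
label, r = smoothed_predict(toy_f, x, sigma=10.0)  # radius is in degrees
```

Because the toy classifier is invariant to small rotations, nearly all sampled votes agree and a positive certified angle radius is returned.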
Related papers
- Fine-grained Image-to-LiDAR Contrastive Distillation with Visual Foundation Models [55.99654128127689]
Visual Foundation Models (VFMs) are used to enhance 3D representation learning.
VFMs generate semantic labels for weakly-supervised pixel-to-point contrastive distillation.
We adapt sampling probabilities of points to address imbalances in spatial distribution and category frequency.
arXiv Detail & Related papers (2024-05-23T07:48:19Z) - General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing [5.5855074442298696]
We propose General Lipschitz (GL), a new framework to certify neural networks against composable resolvable semantic perturbations.
Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
arXiv Detail & Related papers (2023-08-17T14:39:24Z) - ParGAN: Learning Real Parametrizable Transformations [50.51405390150066]
We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations.
The proposed generator takes as input both an image and a parametrization of the transformation.
We show how, with disjoint image domains and no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
arXiv Detail & Related papers (2022-11-09T16:16:06Z) - GSmooth: Certified Robustness against Semantic Transformations via
Generalized Randomized Smoothing [40.38555458216436]
We propose a unified theoretical framework for certifying robustness against general semantic transformations.
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
arXiv Detail & Related papers (2022-06-09T07:12:17Z) - Rigidity Preserving Image Transformations and Equivariance in
Perspective [15.261790674845562]
We characterize the class of image plane transformations which realize rigid camera motions and call these transformations 'rigidity preserving'.
In particular, 2D translations of pinhole images are not rigidity preserving.
arXiv Detail & Related papers (2022-01-31T08:43:10Z) - Quantised Transforming Auto-Encoders: Achieving Equivariance to
Arbitrary Transformations in Deep Networks [23.673155102696338]
Convolutional Neural Networks (CNNs) are equivariant to image translation.
We propose an auto-encoder architecture whose embedding obeys an arbitrary set of equivariance relations simultaneously.
We demonstrate results of successful re-rendering of transformed versions of input images on several datasets.
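The translation-equivariance property that motivates this line of work can be verified directly: shifting an image and then convolving gives the same result as convolving and then shifting. The sketch below uses circular shifts and `mode='wrap'` boundary handling so the equivariance is exact; real CNNs with zero padding only satisfy it approximately near image borders:

```python
import numpy as np
from scipy.ndimage import convolve

# Convolution commutes with (circular) translation: shift-then-convolve
# equals convolve-then-shift.
rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
kern = rng.normal(size=(3, 3))

shift_then_conv = convolve(np.roll(img, 2, axis=0), kern, mode='wrap')
conv_then_shift = np.roll(convolve(img, kern, mode='wrap'), 2, axis=0)
equivariant = np.allclose(shift_then_conv, conv_then_shift)
```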
arXiv Detail & Related papers (2021-11-25T02:26:38Z) - XCiT: Cross-Covariance Image Transformers [73.33400159139708]
We propose a "transposed" version of self-attention that operates across feature channels rather than tokens.
The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images.
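A single-head NumPy sketch of the cross-covariance idea, under the assumption of a 2D `(n_tokens, d)` input (the function name `xca` and the weight shapes are illustrative, not the library's API): the softmax is taken over a d×d channel-correlation matrix, so the cost scales as O(n·d²) rather than the O(n²·d) of token self-attention.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xca(x, wq, wk, wv, tau=1.0):
    """Single-head cross-covariance attention over an (n_tokens, d) input."""
    q, k, v = x @ wq, x @ wk, x @ wv
    # L2-normalize along the token axis so attention reflects channel
    # correlations rather than token magnitudes
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-6)
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-6)
    attn = softmax(kn.T @ qn / tau)  # (d, d): size independent of n_tokens
    return v @ attn                  # (n_tokens, d)

n, d = 32, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = xca(x, wq, wk, wv)
```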
arXiv Detail & Related papers (2021-06-17T17:33:35Z) - A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image
Classification [97.81205777897043]
Recent work has shown that convolutional neural network classifiers overly rely on texture at the expense of shape cues.
We make a similar but distinct distinction: between shape and local image cues on the one hand, and global image statistics on the other.
Our method, called Permuted Adaptive Instance Normalization (pAdaIN), reduces the representation of global statistics in the hidden layers of image classifiers.
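The permutation idea can be sketched in a few lines. This is a simplified illustration assuming `(batch, channels, height, width)` activations, not the authors' implementation: each sample is instance-normalized and then re-styled with the channel-wise statistics of a randomly permuted batch member, which suppresses the global statistics a classifier might otherwise exploit.

```python
import numpy as np

def padain(feats, p=0.5, rng=None):
    """Permuted AdaIN sketch: with probability p, re-style every sample in the
    batch with the channel-wise mean/std of a randomly chosen other sample.
    """
    rng = rng or np.random.default_rng(0)
    if rng.random() >= p:  # applied stochastically during training
        return feats
    perm = rng.permutation(feats.shape[0])
    mu = feats.mean(axis=(2, 3), keepdims=True)
    sd = feats.std(axis=(2, 3), keepdims=True) + 1e-5
    # normalize each sample, then apply another sample's statistics
    return (feats - mu) / sd * sd[perm] + mu[perm]

batch = np.random.default_rng(1).normal(size=(4, 3, 8, 8))
styled = padain(batch, p=1.0)  # p=1 forces the permutation here
```

Since the operation only permutes which sample's statistics are applied, the multiset of per-sample channel means in the batch is preserved.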
arXiv Detail & Related papers (2020-10-09T16:38:38Z) - Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.