Robust Universal Adversarial Perturbations
- URL: http://arxiv.org/abs/2206.10858v2
- Date: Tue, 6 Jun 2023 05:16:38 GMT
- Title: Robust Universal Adversarial Perturbations
- Authors: Changming Xu, Gagandeep Singh
- Abstract summary: We introduce and formulate UAPs robust against real-world transformations.
Our results show that our method can generate UAPs up to 23% more robust than state-of-the-art baselines.
- Score: 2.825323579996619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal Adversarial Perturbations (UAPs) are imperceptible, image-agnostic
vectors that cause deep neural networks (DNNs) to misclassify inputs with high
probability. In practical attack scenarios, adversarial perturbations may
undergo transformations such as changes in pixel intensity, scaling, etc.
before being added to DNN inputs. Existing methods do not create UAPs robust to
these real-world transformations, thereby limiting their applicability in
practical attack scenarios. In this work, we introduce and formulate UAPs
robust against real-world transformations. We build an iterative algorithm
using probabilistic robustness bounds and construct such UAPs robust to
transformations generated by composing arbitrary sub-differentiable
transformation functions. We perform an extensive evaluation on the popular
CIFAR-10 and ILSVRC 2012 datasets measuring our UAPs' robustness under a wide
range of common, real-world transformations such as rotation, contrast changes,
etc. We further show that by using a set of primitive transformations our
method can generalize well to unseen transformations such as fog, JPEG
compression, etc. Our results show that our method can generate UAPs up to 23%
more robust than state-of-the-art baselines.
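The algorithmic core can be pictured as expectation-over-transformation PGD: at each step, the candidate UAP is passed through a randomly sampled composition of sub-differentiable transformations before being added to the input, and the perturbation is updated to increase the classification loss. Below is a minimal PyTorch sketch under that reading; `model` and `loader` are hypothetical stand-ins, the transformation set is reduced to contrast scaling plus rotation, and the paper's probabilistic robustness bounds are not reproduced.

```python
import math
import torch
import torch.nn.functional as F

def robust_uap(model, loader, eps=10/255, alpha=1/255, epochs=5, device="cpu"):
    """Sketch: build a universal perturbation delta such that x + T(delta)
    is misclassified even when delta first undergoes a random transform T."""
    delta = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)  # CIFAR-10 shape
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # Random composition of sub-differentiable transforms:
            # contrast scaling followed by a small rotation.
            contrast = torch.empty(1, device=device).uniform_(0.7, 1.3)
            ang = math.radians(torch.empty(1).uniform_(-10, 10).item())
            theta = torch.tensor([[[math.cos(ang), -math.sin(ang), 0.0],
                                   [math.sin(ang),  math.cos(ang), 0.0]]], device=device)
            grid = F.affine_grid(theta, list(delta.shape), align_corners=False)
            t_delta = F.grid_sample(contrast * delta, grid, align_corners=False)
            loss = F.cross_entropy(model((x + t_delta).clamp(0, 1)), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # gradient *ascent*: increase the loss
                delta.clamp_(-eps, eps)             # keep the UAP imperceptible
                delta.grad.zero_()
    return delta.detach()
```

Note that the perturbation itself is transformed before it reaches the input, matching the threat model in the abstract, where the UAP undergoes real-world transformations before being added to DNN inputs.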
Related papers
- Texture Re-scalable Universal Adversarial Perturbation [61.33178492209849]
We propose a texture-scale-constrained UAP (TSC-UAP), which automatically generates UAPs with category-specific local textures.
TSC-UAP achieves a considerable improvement in the fooling ratio and attack transferability for both data-dependent and data-free UAP methods.
arXiv Detail & Related papers (2024-06-10T08:18:55Z) - Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal
Approach [51.012396632595554]
Invariant representation learning (IRL) encourages predictions to rely on invariant causal features de-confounded from the environments.
Recent theoretical results verify that some causal features recovered by IRL methods merely appear domain-invariant in the training environments but fail in unseen domains.
We develop an approach based on conditional mutual information with respect to RS-SCM, which rigorously rectifies the spurious and fake invariant effects.
arXiv Detail & Related papers (2023-12-15T12:58:05Z) - Adversarial and Random Transformations for Robust Domain Adaptation and
Generalization [9.995765847080596]
We show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained.
The combined adversarial and random transformations based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets.
arXiv Detail & Related papers (2022-11-13T02:10:13Z) - f-DM: A Multi-stage Diffusion Model via Progressive Signal
- f-DM: A Multi-stage Diffusion Model via Progressive Signal Transformation [56.04628143914542]
Diffusion models (DMs) have recently emerged as state-of-the-art tools for generative modeling in various domains.
We propose f-DM, a generalized family of DMs which allows progressive signal transformation.
We apply f-DM in image generation tasks with a range of functions, including down-sampling, blurring, and learned transformations.
arXiv Detail & Related papers (2022-10-10T18:49:25Z) - GSmooth: Certified Robustness against Semantic Transformations via
Generalized Randomized Smoothing [40.38555458216436]
We propose a unified theoretical framework for certifying robustness against general semantic transformations.
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
arXiv Detail & Related papers (2022-06-09T07:12:17Z) - TPC: Transformation-Specific Smoothing for Point Cloud Models [9.289813586197882]
- TPC: Transformation-Specific Smoothing for Point Cloud Models [9.289813586197882]
We propose a transformation-specific smoothing framework TPC, which provides robustness guarantees for point cloud models against semantic transformation attacks.
Experiments on several common 3D transformations show that TPC significantly outperforms the state of the art.
arXiv Detail & Related papers (2022-01-30T05:41:50Z) - OneDConv: Generalized Convolution For Transform-Invariant Representation [76.15687106423859]
We propose a novel generalized one-dimensional convolutional operator (OneDConv).
It dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner.
It improves the robustness and generalization of convolution without sacrificing the performance on common images.
arXiv Detail & Related papers (2022-01-15T07:44:44Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information from CNNs with the global context provided by transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z) - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial
Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z) - Improving Robustness of Adversarial Attacks Using an Affine-Invariant
- Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator [15.863109283735625]
Adversarial examples can deceive a deep neural network (DNN) by significantly altering its response with imperceptible perturbations.
Most existing adversarial examples cannot maintain their malicious functionality when an affine transformation is applied to them.
We propose an affine-invariant adversarial attack which can consistently construct adversarial examples robust over a distribution of affine transformations.
arXiv Detail & Related papers (2021-09-13T09:43:17Z) - Data Augmentation via Structured Adversarial Perturbations [25.31035665982414]
We propose a method to generate adversarial examples that maintain some desired natural structure.
We demonstrate this approach through two types of image transformations: photometric and geometric.
arXiv Detail & Related papers (2020-11-05T18:07:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.