On the Interplay of Convolutional Padding and Adversarial Robustness
- URL: http://arxiv.org/abs/2308.06612v1
- Date: Sat, 12 Aug 2023 17:06:48 GMT
- Title: On the Interplay of Convolutional Padding and Adversarial Robustness
- Authors: Paul Gavrikov and Janis Keuper
- Abstract summary: We show that adversarial attacks often result in perturbation anomalies at the image boundaries, which are the areas where padding is used.
We seek an answer to the question of how different padding modes (or their absence) affect adversarial robustness in various scenarios.
- Score: 16.306183236605364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is common practice to apply padding prior to convolution operations to
preserve the resolution of feature maps in Convolutional Neural Networks (CNNs).
While many alternatives exist, this is often achieved by adding a border of
zeros around the inputs. In this work, we show that adversarial attacks often
result in perturbation anomalies at the image boundaries, which are the areas
where padding is used. Consequently, we aim to provide an analysis of the
interplay between padding and adversarial attacks and seek an answer to the
question of how different padding modes (or their absence) affect adversarial
robustness in various scenarios.
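The mechanics the abstract refers to can be sketched briefly: a "valid" convolution with a k×k kernel shrinks each spatial dimension by k-1, and padding the input border by (k-1)/2 pixels restores the original resolution. The padding mode (zero/"constant", "reflect", "edge", etc.) only decides which values fill that border. The following minimal NumPy sketch illustrates this; the naive `conv2d_valid` helper is written here for illustration and is not from the paper.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive "valid" cross-correlation: output shrinks by (kh-1, kw-1)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)   # a 5x5 "feature map"
k = np.ones((3, 3)) / 9.0                      # 3x3 box-filter kernel

# Without padding, a 3x3 kernel shrinks the 5x5 map to 3x3.
print(conv2d_valid(x, k).shape)

# Padding by 1 pixel restores the 5x5 resolution for every mode;
# "constant" (all zeros) is the zero padding discussed in the paper.
for mode in ("constant", "reflect", "edge", "wrap"):
    xp = np.pad(x, 1, mode=mode)
    print(mode, conv2d_valid(xp, k).shape)
```

Since all modes yield the same output shape, differences in adversarial robustness can only come from the border values each mode introduces, which is precisely the boundary region where the paper observes perturbation anomalies.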
Related papers
- Exploring Robust Features for Improving Adversarial Robustness [11.935612873688122]
We explore the robust features which are not affected by the adversarial perturbations to improve the model's adversarial robustness.
Specifically, we propose a feature disentanglement model to segregate the robust features from non-robust features and domain specific features.
The trained domain discriminator is able to identify the domain specific features from the clean images and adversarial examples almost perfectly.
arXiv Detail & Related papers (2023-09-09T00:30:04Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - Context-aware Padding for Semantic Segmentation [82.37483350347559]
We propose a context-aware (CA) padding approach to extend the image.
Using context-aware padding, the ResNet-based segmentation model achieves higher mean Intersection-Over-Union than the traditional zero padding.
arXiv Detail & Related papers (2021-09-16T10:33:21Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can avoid this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Adversarial Robustness Across Representation Spaces [35.58913661509278]
Adversarial robustness concerns the susceptibility of deep neural networks to imperceptible perturbations made at test time.
In this work we extend the setting to consider the problem of training of deep neural networks that can be made simultaneously robust to perturbations applied in multiple natural representation spaces.
arXiv Detail & Related papers (2020-12-01T19:55:58Z) - Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z) - D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack [19.368450129985423]
We propose a novel technique that can generate natural-looking adversarial examples by bounding the variations induced in internal activation values in some deep layer(s).
By bounding model internals instead of individual pixels, our attack admits perturbations closely coupled with existing features of the original input.
Our attack can achieve the same attack success/confidence level while having much more natural-looking adversarial perturbations.
arXiv Detail & Related papers (2020-06-12T15:14:28Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z) - Generating Semantic Adversarial Examples via Feature Manipulation [23.48763375455514]
We propose a more practical adversarial attack by designing structured perturbation with semantic meanings.
Our proposed technique manipulates the semantic attributes of images via the disentangled latent codes.
We demonstrate the existence of a universal, image-agnostic semantic adversarial example.
arXiv Detail & Related papers (2020-01-06T06:28:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.