Evaluating Adversarial Robustness in the Spatial Frequency Domain
- URL: http://arxiv.org/abs/2405.06345v1
- Date: Fri, 10 May 2024 09:20:47 GMT
- Title: Evaluating Adversarial Robustness in the Spatial Frequency Domain
- Authors: Keng-Hsin Liao, Chin-Yuan Yeh, Hsi-Wen Chen, Ming-Syan Chen
- Abstract summary: Convolutional Neural Networks (CNNs) have dominated the majority of computer vision tasks.
CNNs' vulnerability to adversarial attacks has raised concerns about deploying these models to safety-critical applications.
This paper presents an empirical study exploring the vulnerability of CNN models in the frequency domain.
- Score: 13.200404022208858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional Neural Networks (CNNs) have dominated the majority of computer vision tasks. However, CNNs' vulnerability to adversarial attacks has raised concerns about deploying these models to safety-critical applications. In contrast, the Human Visual System (HVS), which utilizes spatial frequency channels to process visual signals, is immune to adversarial attacks. As such, this paper presents an empirical study exploring the vulnerability of CNN models in the frequency domain. Specifically, we utilize the discrete cosine transform (DCT) to construct the Spatial-Frequency (SF) layer to produce a block-wise frequency spectrum of an input image and formulate Spatial Frequency CNNs (SF-CNNs) by replacing the initial feature extraction layers of widely-used CNN backbones with the SF layer. Through extensive experiments, we observe that SF-CNN models are more robust than their CNN counterparts under both white-box and black-box attacks. To further explain the robustness of SF-CNNs, we compare the SF layer with a trainable convolutional layer with identical kernel sizes using two mixing strategies to show that the lower frequency components contribute the most to the adversarial robustness of SF-CNNs. We believe our observations can guide the future design of robust CNN models.
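To make the construction concrete, below is a minimal PyTorch sketch of a block-wise DCT layer of the kind the abstract describes. The class name `SFLayer`, the 8x8 block size, and the way the DCT coefficients are stacked into channels are illustrative assumptions, not details taken from the paper or its released code.

```python
# Minimal sketch of a block-wise DCT "Spatial-Frequency" layer.
# Assumptions (not from the paper): 8x8 blocks, orthonormal DCT-II,
# coefficients flattened into the channel dimension.
import math
import torch
import torch.nn as nn


def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = torch.arange(n).float()
    basis = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1.0 / math.sqrt(2)
    return basis * math.sqrt(2.0 / n)


class SFLayer(nn.Module):
    """Maps each non-overlapping block of every input channel to its 2-D DCT
    coefficients, producing a block-wise frequency spectrum that can replace
    a CNN backbone's initial feature extraction layers."""

    def __init__(self, block_size: int = 8):
        super().__init__()
        self.block_size = block_size
        self.register_buffer("dct", dct_matrix(block_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.block_size
        # Split the image into non-overlapping s x s blocks.
        blocks = x.unfold(2, s, s).unfold(3, s, s)  # (b, c, h//s, w//s, s, s)
        # 2-D DCT of every block: D @ block @ D^T.
        spec = torch.einsum("ij,bcxyjk,lk->bcxyil", self.dct, blocks, self.dct)
        # Stack the s*s frequency coefficients along the channel dimension.
        spec = spec.permute(0, 1, 4, 5, 2, 3).reshape(b, c * s * s, h // s, w // s)
        return spec


# Usage: the spectrum would be fed to the remaining backbone layers
# (e.g. a truncated ResNet expecting c * s * s input channels).
sf = SFLayer(block_size=8)
spectrum = sf(torch.randn(1, 3, 224, 224))  # -> (1, 192, 28, 28)
```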
Related papers
- FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting [12.062691258844628]
This paper introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture.
Our experiments show that, in combination with simple and fast FGSM adversarial training, our hyper-parameter-free operator significantly improves model robustness.
arXiv Detail & Related papers (2022-04-01T14:51:28Z) - Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation [123.33816363589506]
We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components (a minimal sketch of this kind of frequency manipulation appears after this list).
Our method is even competitive with mainstream transfer-based black-box attacks.
arXiv Detail & Related papers (2022-03-09T09:51:00Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection [186.34889055196925]
We investigate the adversarial robustness of CNNs from the perspective of channel-wise activations.
We observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.
We introduce a novel mechanism, i.e., Channel-wise Importance-based Feature Selection (CIFS).
arXiv Detail & Related papers (2021-02-10T08:16:43Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks [16.431689066281265]
Convolutional Neural Networks (CNNs) have emerged as a powerful data-dependent hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
arXiv Detail & Related papers (2020-12-20T11:35:29Z) - Extreme Value Preserving Networks [65.2037926048262]
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, which leaves them non-robust to adversarial perturbations over textures.
This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness.
arXiv Detail & Related papers (2020-11-17T02:06:52Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z) - Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder [11.701729403940798]
We propose an attack-agnostic defence framework to enhance the intrinsic robustness of neural networks.
Our framework applies to all block-based convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-05-06T01:40:26Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are potential candidates for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
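Several of the listed papers, like the main paper's SF layer, revolve around manipulating an image's frequency spectrum. The sketch below shows one simple way to damp (or amplify) high-frequency DCT components of an image; the function name, the `cutoff`, and the `factor` are illustrative assumptions and do not reproduce the attack or defense of any listed paper.

```python
# Hypothetical illustration of DCT-based frequency manipulation:
# scale coefficients above a chosen diagonal frequency cutoff.
import numpy as np
from scipy.fft import dctn, idctn


def scale_high_frequencies(image: np.ndarray, cutoff: int = 32, factor: float = 0.1) -> np.ndarray:
    """Scale DCT coefficients whose (row + col) frequency index exceeds `cutoff`.

    `image` is an H x W grayscale array; `cutoff` and `factor` are
    illustrative parameters, not values taken from any of the papers above.
    """
    coeffs = dctn(image, norm="ortho")
    rows, cols = np.indices(coeffs.shape)
    mask = (rows + cols) > cutoff   # True for high-frequency coefficients
    coeffs[mask] *= factor          # damp (factor < 1) or amplify (factor > 1)
    return idctn(coeffs, norm="ortho")


# Usage: damp the high-frequency content of a random test image.
img = np.random.rand(224, 224)
low_passed = scale_high_frequencies(img, cutoff=32, factor=0.1)
```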
This list is automatically generated from the titles and abstracts of the papers on this site.