Exploiting Frequency Spectrum of Adversarial Images for General
Robustness
- URL: http://arxiv.org/abs/2305.08439v1
- Date: Mon, 15 May 2023 08:36:32 GMT
- Title: Exploiting Frequency Spectrum of Adversarial Images for General
Robustness
- Authors: Chun Yang Tan, Kazuhiko Kawamoto, Hiroshi Kera
- Abstract summary: Adversarial training with an emphasis on phase components significantly improves model performance on clean, adversarial, and common corruption accuracies.
We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between clean and adversarial images.
These images act as substitutes for adversarial images and can be implemented in various adversarial training setups.
- Score: 3.480626767752489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been growing concern over the vulnerability of
convolutional neural networks (CNNs) to image perturbations. However, achieving
general robustness against different types of perturbations remains
challenging, as enhancing robustness to some perturbations (e.g.,
adversarial perturbations) may degrade robustness to others (e.g., common corruptions). In
this paper, we demonstrate that adversarial training with an emphasis on phase
components significantly improves model performance on clean, adversarial, and
common corruption accuracies. We propose a frequency-based data augmentation
method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between
clean and adversarial images to generate two novel training images: adversarial
amplitude and adversarial phase images. These images act as substitutes for
adversarial images and can be implemented in various adversarial training
setups. Through extensive experiments, we demonstrate that our method enables
CNNs to gain general robustness against different types of perturbations and
yields uniform performance across all types of common corruptions.
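Note: as a rough, non-authoritative sketch of the amplitude swap described above (assuming a clean image and a precomputed adversarial counterpart as NumPy arrays; the function name and array conventions are illustrative, not taken from the paper):

    import numpy as np

    def adversarial_amplitude_swap(clean, adv):
        # Swap the amplitude spectra of a clean image and its adversarial
        # counterpart while keeping each image's own phase spectrum.
        # clean, adv: float arrays of shape (H, W) or (H, W, C).
        clean_f = np.fft.fft2(clean, axes=(0, 1))
        adv_f = np.fft.fft2(adv, axes=(0, 1))
        clean_amp, clean_pha = np.abs(clean_f), np.angle(clean_f)
        adv_amp, adv_pha = np.abs(adv_f), np.angle(adv_f)
        # Adversarial amplitude image: adversarial amplitude + clean phase.
        adv_amplitude_img = np.real(
            np.fft.ifft2(adv_amp * np.exp(1j * clean_pha), axes=(0, 1)))
        # Adversarial phase image: clean amplitude + adversarial phase.
        adv_phase_img = np.real(
            np.fft.ifft2(clean_amp * np.exp(1j * adv_pha), axes=(0, 1)))
        return adv_amplitude_img, adv_phase_img

Per the abstract, these two images would then stand in for the adversarial images in an otherwise standard adversarial training setup.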
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Adversarial amplitude swap towards robust image classifiers [3.480626767752489]
We investigate the effect of the amplitude and phase spectra of adversarial images on the robustness of CNN classifiers.
Experiments revealed that images generated by combining the amplitude spectrum of adversarial images with the phase spectrum of clean images contain moderate and general perturbations.
arXiv Detail & Related papers (2022-03-14T14:32:11Z)
- Amicable Aid: Perturbing Images to Improve Classification Performance [20.9291591835171]
Adversarial perturbation of images to attack deep image classification models poses serious security concerns in practice.
We show that by taking the opposite search direction of perturbation, an image can be modified to yield higher classification confidence.
We investigate the universal amicable aid, i.e., a fixed perturbation that can be applied to multiple images to improve their classification results.
arXiv Detail & Related papers (2021-12-09T06:16:08Z)
- Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain [31.182376196295365]
A CNN tends to converge to a local optimum that is closely related to the high-frequency components of the training images.
The paper proposes a new data augmentation perspective that re-combines the phase spectrum of the current image with the amplitude spectrum of a distracter image.
arXiv Detail & Related papers (2021-08-19T04:04:41Z)
- Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration [7.310043452300738]
Deep neural networks achieve high prediction accuracy when the train and test distributions coincide.
In practice, various types of corruptions deviate from this setup and cause severe performance degradation.
We propose a diverse Gaussian noise consistency regularization method for improving robustness of image classifiers under a variety of corruptions.
arXiv Detail & Related papers (2021-04-02T20:25:53Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.