Adversarial amplitude swap towards robust image classifiers
- URL: http://arxiv.org/abs/2203.07138v2
- Date: Tue, 15 Mar 2022 01:32:29 GMT
- Title: Adversarial amplitude swap towards robust image classifiers
- Authors: Tan Chun Yang, Hiroshi Kera, Kazuhiko Kawamoto
- Abstract summary: We investigate the effect of the amplitude and phase spectra of adversarial images on the robustness of CNN classifiers.
Experiments revealed that images generated by combining the amplitude spectrum of adversarial images with the phase spectrum of clean images accommodate moderate and general perturbations.
- Score: 3.480626767752489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of convolutional neural networks (CNNs) to image
perturbations such as common corruptions and adversarial perturbations has
recently been investigated from the perspective of frequency. In this study, we
investigate the effect of the amplitude and phase spectra of adversarial images
on the robustness of CNN classifiers. Extensive experiments revealed that the
images generated by combining the amplitude spectrum of adversarial images and
the phase spectrum of clean images accommodate moderate and general
perturbations, and training with these images equips a CNN classifier with more
general robustness, performing well under both common corruptions and
adversarial perturbations. We also found that two types of overfitting
(catastrophic overfitting and robust overfitting) can be circumvented by the
aforementioned spectrum recombination. We believe that these results contribute
to the understanding and the training of truly robust classifiers.
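As a concrete illustration of the spectrum recombination the abstract describes, the sketch below combines the amplitude spectrum of an adversarial image with the phase spectrum of its clean counterpart via a channel-wise 2-D FFT. This is a minimal sketch, assuming images as float arrays in [0, 1]; the function name and the clipping step are our choices, not the paper's code.

```python
import numpy as np

def recombine_spectra(adv_img: np.ndarray, clean_img: np.ndarray) -> np.ndarray:
    """Combine the amplitude spectrum of an adversarial image with the
    phase spectrum of a clean image (2-D FFT over the spatial axes)."""
    fft_adv = np.fft.fft2(adv_img, axes=(0, 1))
    fft_clean = np.fft.fft2(clean_img, axes=(0, 1))
    amplitude = np.abs(fft_adv)      # amplitude from the adversarial image
    phase = np.angle(fft_clean)      # phase from the clean image
    recombined = amplitude * np.exp(1j * phase)
    out = np.fft.ifft2(recombined, axes=(0, 1)).real
    return np.clip(out, 0.0, 1.0)    # assumed [0, 1] pixel range
```

Training on such recombined images, rather than on the raw adversarial images, is what the abstract credits with the more general robustness.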
Related papers
- Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection [106.39544368711427]
We study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods.
We present a novel forgery-aware adaptive transformer approach, namely FatFormer.
Tuned on 4-class ProGAN data, our approach attains an average accuracy of 98% on unseen GANs and, surprisingly, generalizes to unseen diffusion models with 95% accuracy.
arXiv Detail & Related papers (2023-12-27T17:36:32Z)
- Exposing Image Splicing Traces in Scientific Publications via Uncertainty-guided Refinement [30.698359275889363]
A surge in scientific publications suspected of image manipulation has led to numerous retractions.
Image splicing detection is more challenging due to the lack of reference images and the typically small tampered areas.
We propose an Uncertainty-guided Refinement Network (URN) to mitigate the impact of disruptive factors.
arXiv Detail & Related papers (2023-09-28T12:36:12Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Exploiting Frequency Spectrum of Adversarial Images for General Robustness [3.480626767752489]
Adversarial training with an emphasis on phase components significantly improves clean, adversarial, and common-corruption accuracy.
We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between clean and adversarial images.
These images act as substitutes for adversarial images and can be used in various adversarial training setups, as sketched below.
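A hedged sketch of how the swapped images might stand in for adversarial images inside a training loop, using FGSM as a simple stand-in attack; the step function, epsilon, and optimizer wiring are our assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def amplitude_swap_step(model, x, y, optimizer, eps=8 / 255):
    """One training step: craft FGSM adversarial examples, then train on
    images that take their amplitude spectrum and the clean phase."""
    x_req = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1)   # stand-in one-step attack

    # Amplitude of the adversarial image, phase of the clean image.
    swapped = torch.fft.ifft2(
        torch.polar(torch.fft.fft2(x_adv).abs(), torch.fft.fft2(x).angle())
    ).real.clamp(0, 1)

    optimizer.zero_grad()
    F.cross_entropy(model(swapped), y).backward()
    optimizer.step()
```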
arXiv Detail & Related papers (2023-05-15T08:36:32Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probability distribution of clean images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find that our method significantly improves the denoising capability of networks.
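The sketch below shows the general idea of matching statistics in the feature space of a frozen pretrained classifier; the VGG backbone, layer cut, and first-two-moments proxy are our assumptions, not the paper's D2SM objective.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen pretrained classifier used as a semantic feature extractor
# (inputs assumed already normalized for ImageNet models).
extractor = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def semantic_matching_loss(denoised: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
    """Pull denoised images toward clean ones in semantic feature space,
    matching the first two moments as a simple distribution proxy."""
    f_d, f_c = extractor(denoised), extractor(clean)
    return (F.mse_loss(f_d.mean(dim=(2, 3)), f_c.mean(dim=(2, 3)))
            + F.mse_loss(f_d.std(dim=(2, 3)), f_c.std(dim=(2, 3))))
```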
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain [31.182376196295365]
A CNN tends to converge to a local optimum that is closely related to the high-frequency components of the training images.
The authors propose a new data augmentation perspective that re-combines the phase spectrum of the current image with the amplitude spectrum of a distracter image.
arXiv Detail & Related papers (2021-08-19T04:04:41Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolutional feature maps to increase high-frequency robustness.
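A minimal sketch of a total-variation penalty on convolutional feature maps, assuming (N, C, H, W) activations; the layer choice and weighting are left open, and the paper's exact scheme may differ.

```python
import torch

def feature_map_tv(fmap: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between neighbouring activations of a
    (N, C, H, W) feature map; penalising it damps high-frequency content."""
    dh = (fmap[..., 1:, :] - fmap[..., :-1, :]).abs().mean()
    dw = (fmap[..., :, 1:] - fmap[..., :, :-1]).abs().mean()
    return dh + dw

# Hypothetical usage: loss = task_loss + lam * feature_map_tv(features)
```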
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
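To make the high-pass idea concrete, the sketch below zeroes the low-frequency band of a perturbation in the Fourier domain; the square mask and cutoff radius are illustrative choices, not the paper's HP-UAP construction.

```python
import numpy as np

def high_pass_filter(pert: np.ndarray, radius: int = 8) -> np.ndarray:
    """Keep only the high-frequency content of a (H, W[, C]) perturbation
    by zeroing a square of low frequencies around the spectrum centre."""
    f = np.fft.fftshift(np.fft.fft2(pert, axes=(0, 1)), axes=(0, 1))
    cy, cx = pert.shape[0] // 2, pert.shape[1] // 2
    f[cy - radius:cy + radius, cx - radius:cx + radius] = 0  # drop low freqs
    return np.fft.ifft2(np.fft.ifftshift(f, axes=(0, 1)), axes=(0, 1)).real
```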
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.