WaveTransform: Crafting Adversarial Examples via Input Decomposition
- URL: http://arxiv.org/abs/2010.15773v1
- Date: Thu, 29 Oct 2020 17:16:59 GMT
- Title: WaveTransform: Crafting Adversarial Examples via Input Decomposition
- Authors: Divyam Anshumaan, Akshay Agarwal, Mayank Vatsa, and Richa Singh
- Abstract summary: We introduce `WaveTransform', which creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately (or in combination).
Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
- Score: 69.01794414018603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Frequency spectrum has played a significant role in learning unique and
discriminating features for object recognition. Both low and high frequency
information present in images have been extracted and learnt by a host of
representation learning techniques, including deep learning. Inspired by this
observation, we introduce a novel class of adversarial attacks, namely
`WaveTransform', that creates adversarial noise corresponding to low-frequency
and high-frequency subbands, separately (or in combination). The frequency
subbands are analyzed using wavelet decomposition; the subbands are corrupted
and then used to construct an adversarial example. Experiments are performed
using multiple databases and CNN models to establish the effectiveness of the
proposed WaveTransform attack and analyze the importance of a particular
frequency component. The robustness of the proposed attack is also evaluated
through its transferability and resiliency against a recent adversarial defense
algorithm. Experiments show that the proposed attack is effective against the
defense algorithm and is also transferable across CNNs.
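The subband-level construction described in the abstract can be illustrated with a short sketch. This is a minimal, hedged example using PyWavelets: random noise stands in for the paper's optimized subband perturbations, and the function name, wavelet choice, and epsilon values are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of the subband-corruption idea behind WaveTransform.
# Assumes PyWavelets (pywt) and NumPy; random noise is a stand-in for the
# paper's learned perturbations, and all parameter values are illustrative.
import numpy as np
import pywt

def perturb_subbands(image, eps_low=0.0, eps_high=0.03, wavelet="haar", seed=0):
    """Decompose `image` (H x W array in [0, 1]) into wavelet subbands,
    add noise to the low- and/or high-frequency subbands, and reconstruct."""
    rng = np.random.default_rng(seed)
    # Single-level 2D DWT: LL is the low-frequency subband,
    # (LH, HL, HH) are the high-frequency detail subbands.
    LL, (LH, HL, HH) = pywt.dwt2(image, wavelet)

    LL = LL + eps_low * rng.standard_normal(LL.shape)    # corrupt low frequencies
    LH = LH + eps_high * rng.standard_normal(LH.shape)   # corrupt high frequencies
    HL = HL + eps_high * rng.standard_normal(HL.shape)
    HH = HH + eps_high * rng.standard_normal(HH.shape)

    # Inverse DWT reconstructs the perturbed image from the subbands.
    adv = pywt.idwt2((LL, (LH, HL, HH)), wavelet)
    return np.clip(adv, 0.0, 1.0)

# Usage (hypothetical grayscale image x in [0, 1]):
# x_adv = perturb_subbands(x, eps_low=0.0, eps_high=0.03)
```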
Related papers
- Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks [33.743914380312226]
Adversarial examples are a key method to exploit deep neural networks.
Recent frequency-domain transformations have enhanced the transferability of such adversarial examples.
We propose a simple, effective, and scalable gradient-based adversarial attack algorithm.
arXiv Detail & Related papers (2024-08-22T18:24:08Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm [12.711880028935315]
Convolutional neural networks (CNNs) have achieved success in computer vision tasks, but are vulnerable to backdoor attacks.
We propose a robust low-frequency black-box backdoor attack (LFBA), which minimally perturbs low-frequency components of the frequency spectrum.
Experiments on real-world datasets verify the effectiveness and robustness of LFBA against image processing operations and the state-of-the-art backdoor defenses.
arXiv Detail & Related papers (2024-02-23T23:36:36Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain (a minimal sketch of this idea appears after this list).
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Wavelet Regularization Benefits Adversarial Training [11.157873822750561]
We propose a wavelet regularization method, named Wavelet Average Pooling, based on the Haar wavelet decomposition (a minimal sketch of such a low-pass pooling layer appears after this list).
On the CIFAR-10 and CIFAR-100 datasets, our proposed Adversarial Wavelet Training method achieves considerable robustness under different types of attacks.
arXiv Detail & Related papers (2022-06-08T08:00:30Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
- WaveFill: A Wavelet-based Generation Network for Image Inpainting [57.012173791320855]
WaveFill is a wavelet-based inpainting network that decomposes images into multiple frequency bands.
WaveFill decomposes images by using discrete wavelet transform (DWT) that preserves spatial information naturally.
It applies an L1 reconstruction loss to the low-frequency band and an adversarial loss to the high-frequency bands, thereby effectively mitigating inter-frequency conflicts.
arXiv Detail & Related papers (2021-07-23T04:44:40Z)
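For the FreqFed entry above, the frequency-domain aggregation idea can be sketched roughly as follows. This is a hedged illustration, not the paper's algorithm: `frequency_filtered_average`, `n_low`, and the median-distance filter are assumptions standing in for FreqFed's actual transform and clustering of model updates.

```python
# Hedged sketch of frequency-domain filtering of client updates in the spirit
# of FreqFed. The median-distance rule replaces the paper's clustering step
# and is purely illustrative.
import numpy as np
from scipy.fft import dct

def frequency_filtered_average(updates, n_low=64):
    """updates: list of equal-length 1-D numpy arrays (flattened client model deltas)."""
    # Keep only the first n_low DCT coefficients of each update
    # (its low-frequency signature).
    specs = np.stack([dct(u, norm="ortho")[:n_low] for u in updates])
    # Score each update by its distance to the element-wise median spectrum
    # and keep the closer half (stand-in for the paper's clustering step).
    center = np.median(specs, axis=0)
    dists = np.linalg.norm(specs - center, axis=1)
    keep = dists <= np.median(dists)
    kept = [u for u, k in zip(updates, keep) if k]
    return np.mean(kept, axis=0)
```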
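And for the Wavelet Average Pooling entry, a minimal PyTorch sketch of a Haar low-pass pooling layer is below. The class name and the drop-in usage are illustrative assumptions; the paper's regularizer may differ in normalization and placement.

```python
# Hedged sketch of a Haar low-pass pooling layer: downsampling keeps only the
# LL (low-frequency) subband and discards the high-frequency detail subbands.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarAveragePool2d(nn.Module):
    """With the 0.25 normalization used here the LL subband equals 2x2 average
    pooling; the orthonormal Haar LL coefficient would be 0.5 instead."""
    def forward(self, x):
        c = x.shape[1]
        # Depthwise 2x2 low-pass filter applied with stride 2.
        kernel = x.new_full((c, 1, 2, 2), 0.25)
        return F.conv2d(x, kernel, stride=2, groups=c)

# Usage (hypothetical): swap nn.AvgPool2d(2) or a strided pooling layer in a
# CNN for HaarAveragePool2d().
```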
This list is automatically generated from the titles and abstracts of the papers on this site.