Frequency-Tuned Universal Adversarial Attacks
- URL: http://arxiv.org/abs/2003.05549v2
- Date: Tue, 9 Jun 2020 18:37:09 GMT
- Title: Frequency-Tuned Universal Adversarial Attacks
- Authors: Yingpeng Deng and Lina J. Karam
- Abstract summary: We propose a frequency-tuned universal attack method to compute universal perturbations.
We show that our method achieves a good balance between perceivability and effectiveness in terms of fooling rate.
- Score: 19.79803434998116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers have shown that the predictions of a convolutional neural network
(CNN) for an image set can be severely distorted by a single image-agnostic
perturbation, or universal perturbation, usually with an empirically fixed
threshold in the spatial domain to restrict its perceivability. However, by
considering human perception, we propose to adopt just-noticeable-difference
(JND) thresholds to guide the perceivability of universal adversarial
perturbations. Based on this, we
propose a frequency-tuned universal attack method to compute universal
perturbations and show that our method can realize a good balance between
perceivability and effectiveness in terms of fooling rate by adapting the
perturbations to the local frequency content. Compared with existing universal
adversarial attack techniques, our frequency-tuned attack method can achieve
cutting-edge quantitative results. We demonstrate that our approach can
significantly improve the performance of the baseline on both white-box and
black-box attacks.
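To make the idea concrete, here is a minimal sketch of JND-guided frequency tuning: a candidate universal perturbation is transformed with an 8x8 block DCT and each coefficient is clipped to a per-band threshold. The simple linear `jnd` matrix, the block size, and the 224x224 single-channel perturbation are illustrative assumptions; the paper's perceptually derived JND model is more elaborate.

```python
# A minimal sketch, assuming an 8x8 block DCT and a toy JND matrix.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(x, b=8):
    out = np.zeros_like(x)
    for i in range(0, x.shape[0], b):
        for j in range(0, x.shape[1], b):
            out[i:i+b, j:j+b] = dctn(x[i:i+b, j:j+b], norm='ortho')
    return out

def block_idct(X, b=8):
    out = np.zeros_like(X)
    for i in range(0, X.shape[0], b):
        for j in range(0, X.shape[1], b):
            out[i:i+b, j:j+b] = idctn(X[i:i+b, j:j+b], norm='ortho')
    return out

# Hypothetical JND thresholds: tolerance grows with frequency, standing in
# for a perceptual (luminance/contrast-masked) JND model.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
jnd = 1.0 + 0.5 * (u + v)                      # illustrative values only

def clip_to_jnd(delta, jnd, b=8):
    """Clip each block-DCT coefficient of delta to its JND threshold."""
    D = block_dct(delta, b)
    tiled = np.tile(jnd, (delta.shape[0] // b, delta.shape[1] // b))
    return block_idct(np.clip(D, -tiled, tiled), b)

delta = np.random.randn(224, 224)              # candidate universal perturbation
delta_jnd = clip_to_jnd(delta, jnd)            # perceptually constrained version
```

Because the thresholds grow with frequency, more perturbation energy is tolerated where the human visual system is less sensitive, which is the balance between perceivability and fooling rate that the abstract describes.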
Related papers
- Towards Transferable Adversarial Attacks with Centralized Perturbation [4.689122927344728]
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs).
Current transferable attacks create adversarial perturbations over the entire image, resulting in excessive noise that overfits the source model.
We propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation.
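As a hedged sketch of the centralization idea, the snippet below keeps only the dominant DCT coefficients of a perturbation so that noise is not spread over the whole image; the whole-image DCT, top-k magnitude rule, and keep ratio are illustrative stand-ins for the paper's fine-grained frequency-domain optimization.

```python
# A minimal sketch, assuming a whole-image DCT and a top-k magnitude rule.
import numpy as np
from scipy.fft import dctn, idctn

def centralize(delta, keep_ratio=0.05):
    D = dctn(delta, norm='ortho')
    k = max(1, int(keep_ratio * D.size))
    thresh = np.partition(np.abs(D).ravel(), -k)[-k]
    D[np.abs(D) < thresh] = 0.0                # drop non-dominant components
    return idctn(D, norm='ortho')

delta = np.random.randn(224, 224)
delta_c = centralize(delta)                    # concentrated, lower-noise perturbation
```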
arXiv Detail & Related papers (2023-12-11T08:25:50Z)
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is proposed based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation over different images, optimizing over different regions of a single image to achieve self-universality removes the need for extra data.
With the feature similarity loss, our method makes the features of adversarial perturbations more dominant than those of benign images.
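A minimal sketch of the self-universality idea, assuming a PyTorch classifier whose `model.features` attribute (hypothetical here) exposes intermediate features: the loss encourages features of a resized local crop of the adversarial batch to agree with features of the whole adversarial batch, alongside a targeted classification term.

```python
# A minimal sketch; model.features is a hypothetical feature extractor.
import torch
import torch.nn.functional as F

def self_universal_loss(model, x_adv, target, crop=128):
    n, _, h, w = x_adv.shape
    # Random local region of the same adversarial batch, resized back up.
    i = torch.randint(0, h - crop + 1, (1,)).item()
    j = torch.randint(0, w - crop + 1, (1,)).item()
    local = F.interpolate(x_adv[..., i:i+crop, j:j+crop], size=(h, w),
                          mode='bilinear', align_corners=False)

    f_global = model.features(x_adv).flatten(1)
    f_local = model.features(local).flatten(1)
    sim = F.cosine_similarity(f_global, f_local, dim=1).mean()

    ce = F.cross_entropy(model(x_adv), target)  # targeted cross-entropy
    return ce - sim     # minimize: push toward target, maximize similarity
```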
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In our comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a low level.
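The sketch below shows one guided reverse-diffusion step in this spirit: a standard DDPM-style denoising update plus a guidance term that keeps the sample close to the adversarial input so image content survives purification. The `eps_model` noise predictor, schedule tensors, and guidance scale are assumptions, not GDMP's exact formulation.

```python
# A minimal sketch of one guided reverse-diffusion step, assuming a
# DDPM-style noise predictor and schedule tensors alpha, alpha_bar, sigma.
import torch

def guided_reverse_step(x_t, x_adv, t, eps_model, alpha, alpha_bar, sigma, s=1.0):
    eps = eps_model(x_t, t)                    # predicted noise at step t
    mean = (x_t - (1 - alpha[t]) / torch.sqrt(1 - alpha_bar[t]) * eps) \
           / torch.sqrt(alpha[t])
    # Guidance: gradient of -||x_t - x_adv||^2 / 2 keeps the sample close to
    # the input, so image content survives while adversarial noise is removed.
    guide = -(x_t - x_adv)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + s * sigma[t] ** 2 * guide + sigma[t] * noise
```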
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
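The smoothing mechanism itself can be sketched simply: act on Gaussian-noised copies of the observation and take a majority vote. The paper's certificate comes from randomized-smoothing-style bounds over whole trajectories; this sketch only illustrates the smoothed policy, with `policy` and `sigma` as assumptions.

```python
# A minimal sketch of the smoothing mechanism only; the reward certificate
# from the paper is not computed here.
import torch

def smoothed_action(policy, obs, sigma=0.1, n=100):
    noisy = obs.unsqueeze(0) + sigma * torch.randn(n, *obs.shape)
    actions = policy(noisy).argmax(dim=-1)     # one action per noisy copy
    return torch.mode(actions).values.item()   # majority vote
```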
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation [58.6157191438473]
We propose a perceptual distortion reduction framework to tackle this problem from two perspectives.
We propose a perceptual distortion constraint and add it to the objective function of the adversarial attack to jointly optimize perceptual distortion and attack success rate.
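A minimal sketch of such a joint objective, assuming a caller-supplied perceptual distance `percep_fn` (for example an LPIPS-style metric) as a stand-in for the paper's perceptual distortion constraint:

```python
# A minimal sketch; percep_fn is a caller-supplied perceptual distance.
import torch
import torch.nn.functional as F

def perceptual_attack_step(model, x, x_adv, y, percep_fn, lam=0.1, step=1e-2):
    x_adv = x_adv.detach().clone().requires_grad_(True)
    attack_loss = -F.cross_entropy(model(x_adv), y)   # untargeted: raise CE
    loss = attack_loss + lam * percep_fn(x_adv, x)    # penalize visibility
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv - step * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```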
arXiv Detail & Related papers (2021-05-01T15:08:10Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
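The high-pass variant is easy to sketch: zero out the low-frequency DCT coefficients of a UAP so the remaining high-frequency noise is less visible to the human eye. The whole-image DCT and cutoff below are illustrative assumptions.

```python
# A minimal sketch of a high-pass UAP; the cutoff is illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def high_pass(delta, cutoff=32):
    D = dctn(delta, norm='ortho')
    D[:cutoff, :cutoff] = 0.0                  # remove low-frequency content
    return idctn(D, norm='ortho')

uap = np.random.randn(224, 224)
hp_uap = high_pass(uap)                        # less visible high-frequency noise
```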
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Towards Imperceptible Universal Attacks on Texture Recognition [19.79803434998116]
We show that limiting the perturbation's $l_p$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images.
We propose a frequency-tuned universal attack method to compute universal perturbations in the frequency domain.
arXiv Detail & Related papers (2020-11-24T08:33:59Z)
- Generalizing Universal Adversarial Attacks Beyond Additive Perturbations [8.72462752199025]
We show that a universal adversarial attack can also be achieved via non-additive perturbations.
We propose a novel unified yet flexible framework for universal adversarial attacks, called GUAP.
Experiments are conducted on CIFAR-10 and ImageNet datasets with six deep neural network models.
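A minimal sketch of combining a non-additive (spatial-warping) universal perturbation with an additive one, in the spirit of such a unified framework; the flow magnitude, epsilon budget, and input sizes are illustrative assumptions.

```python
# A minimal sketch: a universal spatial warp (non-additive) composed with
# universal additive noise. Sizes and budgets are illustrative.
import torch
import torch.nn.functional as F

def apply_unified_uap(x, flow, delta, eps_add=10 / 255):
    n, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    warped = F.grid_sample(x, grid + flow, align_corners=True)  # non-additive part
    return (warped + delta.clamp(-eps_add, eps_add)).clamp(0, 1)

x = torch.rand(4, 3, 224, 224)                 # batch of images in [0, 1]
flow = 0.01 * torch.randn(1, 224, 224, 2)      # universal flow field
delta = 0.02 * torch.randn(1, 3, 224, 224)     # universal additive noise
x_adv = apply_unified_uap(x, flow, delta)
```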
arXiv Detail & Related papers (2020-10-15T14:25:58Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
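The double-targeted idea can be sketched as a two-term loss: a targeted term that flips a chosen source class to a chosen target class, and a preservation term that keeps other images' predictions unchanged. The loss weighting below is an illustrative assumption.

```python
# A minimal sketch of a double-targeted objective; lam is illustrative.
import torch
import torch.nn.functional as F

def dt_uap_loss(model, x_src, x_other, delta, target, lam=1.0):
    # Targeted term: source-class images should flip to `target`.
    t = torch.full((x_src.size(0),), target, dtype=torch.long)
    targeted = F.cross_entropy(model(x_src + delta), t)

    # Preservation term: other images should keep their clean predictions.
    with torch.no_grad():
        clean_pred = model(x_other).argmax(dim=1)
    preserve = F.cross_entropy(model(x_other + delta), clean_pred)
    return targeted + lam * preserve
```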
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.