A Survey On Universal Adversarial Attack
- URL: http://arxiv.org/abs/2103.01498v1
- Date: Tue, 2 Mar 2021 06:35:09 GMT
- Title: A Survey On Universal Adversarial Attack
- Authors: Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon
- Abstract summary: Deep neural networks (DNNs) have demonstrated remarkable performance for various applications.
They are widely known to be vulnerable to adversarial perturbations.
A universal adversarial perturbation (UAP) is a single perturbation that fools the target DNN on most images.
- Score: 68.1815935074054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have demonstrated remarkable performance for
various applications; at the same time, they are widely known to be vulnerable to
adversarial perturbations. This intriguing phenomenon has attracted significant
attention in machine learning, and what may be even more surprising to the
community is the existence of universal adversarial perturbations (UAPs),
i.e., a single perturbation that fools the target DNN on most images. The
advantage of a UAP is that it can be generated beforehand and then applied
on-the-fly during the attack. Focusing on UAPs against deep classifiers,
this survey summarizes recent progress on universal adversarial attacks,
discussing the challenges from both the attack and defense sides, as well as
the reasons for the existence of UAPs. Additionally, universal attacks in a wide
range of applications beyond deep classification are also covered.
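The abstract highlights the practical appeal of a UAP: it is crafted once, offline, and then added to arbitrary inputs at attack time. A minimal NumPy sketch of that application step (not from the survey; the perturbation budget `eps`, the image shapes, and the random data are illustrative assumptions, with random arrays standing in for real images and a crafted UAP):

```python
import numpy as np

def apply_uap(images, uap, eps=10 / 255):
    """Apply a single precomputed universal perturbation to a batch of images."""
    # Project the perturbation onto the L-infinity ball of radius eps,
    # add it to every image in the batch, then clip back to the valid
    # pixel range [0, 1].
    delta = np.clip(uap, -eps, eps)
    return np.clip(images + delta, 0.0, 1.0)

# Hypothetical data: a random batch standing in for real images, and a
# random array standing in for a perturbation crafted ahead of time.
rng = np.random.default_rng(0)
images = rng.random((4, 32, 32, 3)).astype(np.float32)     # batch in [0, 1]
uap = rng.normal(scale=0.05, size=(32, 32, 3)).astype(np.float32)

adv = apply_uap(images, uap)   # same UAP broadcast over every image
```

Because the same `uap` array is broadcast over the whole batch, the per-image cost at attack time is a single addition and clip, which is what makes the attack practical "on-the-fly".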
Related papers
- Joint Universal Adversarial Perturbations with Interpretations [19.140429650679593]
In this paper, we propose a novel attacking framework to generate joint universal adversarial perturbations (JUAP).
To the best of our knowledge, this is the first effort to study UAPs for jointly attacking both DNNs and their interpretations.
arXiv Detail & Related papers (2024-08-03T08:58:04Z)
- Universal Adversarial Attacks on Neural Networks for Power Allocation in a Massive MIMO System [60.46526086158021]
We propose universal adversarial perturbation (UAP)-crafting methods as white-box and black-box attacks.
We show that the adversarial success rate can reach up to 60% (white-box) and 40% (black-box), respectively.
The proposed UAP-based attacks offer a more practical and realistic approach than classical white-box attacks.
arXiv Detail & Related papers (2021-10-10T08:21:03Z)
- Real-time Detection of Practical Universal Adversarial Perturbations [3.806971160251168]
Universal Adversarial Perturbations (UAPs) enable physically realizable and robust attacks against Deep Neural Networks (DNNs).
In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs.
arXiv Detail & Related papers (2021-05-16T03:01:29Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a UAP does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by proposing to utilize class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) on most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Generalizing Universal Adversarial Attacks Beyond Additive Perturbations [8.72462752199025]
We show that a universal adversarial attack can also be achieved via non-additive perturbation.
We propose a novel unified yet flexible framework for universal adversarial attacks, called GUAP.
Experiments are conducted on CIFAR-10 and ImageNet datasets with six deep neural network models.
arXiv Detail & Related papers (2020-10-15T14:25:58Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.