Double Targeted Universal Adversarial Perturbations
- URL: http://arxiv.org/abs/2010.03288v1
- Date: Wed, 7 Oct 2020 09:08:51 GMT
- Title: Double Targeted Universal Adversarial Perturbations
- Authors: Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon
- Abstract summary: We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
- Score: 83.60161052867534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their impressive performance, deep neural networks (DNNs) are widely
known to be vulnerable to adversarial attacks, which makes it challenging for
them to be deployed in security-sensitive applications, such as autonomous
driving. Image-dependent perturbations can fool a network for one specific
image, while universal adversarial perturbations are capable of fooling a
network for samples from all classes without selection. We introduce double
targeted universal adversarial perturbations (DT-UAPs) to bridge the gap
between the instance-discriminative image-dependent perturbations and the
generic universal perturbations. This universal perturbation attacks one
targeted source class, pushing its samples to a chosen sink class, while
having a limited adversarial effect on the other, non-targeted source classes
so as to avoid raising suspicion. Since it targets the source and sink
classes simultaneously, we term it a double targeted attack (DTA). This
provides an attacker with the freedom to perform precise
attacks on a DNN model while raising little suspicion. We show the
effectiveness of the proposed DTA algorithm on a wide range of datasets and
also demonstrate its potential as a physical attack.
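The double-targeted objective described above can be sketched in code. The following is a minimal illustration, not the paper's actual DTA algorithm: it uses a toy linear classifier in place of a DNN, synthetic clustered data, and plain projected gradient descent, all of which are assumptions made here for demonstration. A single perturbation is optimized to pull source-class samples toward the sink class while keeping every other class's cross-entropy loss low.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3                      # feature dimension, number of classes
W = rng.normal(size=(D, C))      # toy linear "network": logits = x @ W

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic data: samples of class c cluster around W[:, c], so the
# unperturbed model classifies them correctly.
X = np.concatenate([W[:, c] + 0.1 * rng.normal(size=(20, D)) for c in range(C)])
y = np.repeat(np.arange(C), 20)

source, sink = 0, 2              # attack goal: push class 0 into class 2
eps, lr = 3.0, 0.05              # L_inf budget and step size (assumed values)
delta = np.zeros(D)              # the single universal perturbation

for _ in range(1000):
    p = softmax((X + delta) @ W)
    # Double-targeted loss (sketch): source samples are pulled toward the
    # sink class; all other samples are pulled toward their true labels,
    # so the perturbation stays quiet on non-targeted classes.
    target = np.where(y == source, sink, y)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), target] -= 1.0   # d CE / d logits
    delta -= lr * (grad_logits @ W.T).mean(axis=0)  # shared gradient step
    delta = np.clip(delta, -eps, eps)               # project onto L_inf ball

pred = ((X + delta) @ W).argmax(axis=1)
src_fooled = (pred[y == source] == sink).mean()     # targeted fooling ratio
others_ok = (pred[y != source] == y[y != source]).mean()
```

The two final metrics correspond to the two sides of the trade-off the abstract describes: `src_fooled` measures how often source-class samples land in the sink class, while `others_ok` measures how unobtrusive the perturbation is on the non-targeted classes.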
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Real-time Detection of Practical Universal Adversarial Perturbations [3.806971160251168]
Universal Adversarial Perturbations (UAPs) enable physically realizable and robust attacks against Deep Neural Networks (DNNs).
In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs.
arXiv Detail & Related papers (2021-05-16T03:01:29Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a UAP does not attack all classes equally.
We improve the SOTA UAT by proposing to utilize class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- A Survey On Universal Adversarial Attack [68.1815935074054]
Deep neural networks (DNNs) have demonstrated remarkable performance for various applications.
They are widely known to be vulnerable to attacks by adversarial perturbations.
Universal adversarial perturbations (UAPs) fool the target DNN for most images.
arXiv Detail & Related papers (2021-03-02T06:35:09Z)
- Generalizing Universal Adversarial Attacks Beyond Additive Perturbations [8.72462752199025]
We show that a universal adversarial attack can also be achieved via non-additive perturbation.
We propose a novel unified yet flexible framework for universal adversarial attacks, called GUAP.
Experiments are conducted on the CIFAR-10 and ImageNet datasets with six deep neural network models.
arXiv Detail & Related papers (2020-10-15T14:25:58Z)
- CD-UAP: Class Discriminative Universal Adversarial Perturbation [83.60161052867534]
A single universal adversarial perturbation (UAP) can be added to all natural images to change most of their predicted class labels.
We propose a new universal attack method to generate a single perturbation that fools a target network into misclassifying only a chosen group of classes.
arXiv Detail & Related papers (2020-10-07T09:26:42Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.