Multi-Task Adversarial Attack
- URL: http://arxiv.org/abs/2011.09824v1
- Date: Thu, 19 Nov 2020 13:56:58 GMT
- Title: Multi-Task Adversarial Attack
- Authors: Pengxin Guo, Yuancheng Xu, Baijiong Lin, Yu Zhang
- Abstract summary: Multi-Task adversarial Attack (MTA) is a unified framework that can craft adversarial examples for multiple tasks efficiently.
MTA uses a generator for adversarial perturbations which consists of a shared encoder for all tasks and multiple task-specific decoders.
Thanks to the shared encoder, MTA reduces the storage cost and speeds up the inference when attacking multiple tasks simultaneously.
- Score: 3.412750324146571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved impressive performance in various areas,
but they are shown to be vulnerable to adversarial attacks. Previous works on
adversarial attacks mainly focused on the single-task setting. However, in real
applications, it is often desirable to attack several models for different
tasks simultaneously. To this end, we propose Multi-Task adversarial Attack
(MTA), a unified framework that can craft adversarial examples for multiple
tasks efficiently by leveraging shared knowledge among tasks, which helps
enable large-scale applications of adversarial attacks on real-world systems.
More specifically, MTA uses a generator for adversarial perturbations which
consists of a shared encoder for all tasks and multiple task-specific decoders.
Thanks to the shared encoder, MTA reduces the storage cost and speeds up the
inference when attacking multiple tasks simultaneously. Moreover, the proposed
framework can be used to generate per-instance and universal perturbations for
targeted and non-targeted attacks. Experimental results on the Office-31 and
NYUv2 datasets demonstrate that MTA can improve the quality of attacks when
compared with its single-task counterpart.
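To make the architecture concrete, here is a minimal PyTorch-style sketch of an MTA-like generator, assuming one shared convolutional encoder and one lightweight decoder per task, with the perturbation bounded to an L-infinity budget via tanh scaling. All module names, layer sizes, and the loss below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTAGenerator(nn.Module):
    """Hypothetical sketch of an MTA-style perturbation generator:
    one shared encoder plus one small decoder per task."""

    def __init__(self, task_names, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon  # assumed L-inf perturbation budget
        # Shared encoder: stored once, regardless of the number of tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task-specific decoders: only this part grows with the task count.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            )
            for name in task_names
        })

    def forward(self, x, task):
        z = self.encoder(x)                      # shared features
        delta = self.decoders[task](z)           # task-specific perturbation
        return self.epsilon * torch.tanh(delta)  # bound to [-eps, eps]

def nontargeted_loss(victim, gen, x, y, task):
    """Per-instance, non-targeted objective: maximize the victim's loss
    (minimized as its negative when training the generator)."""
    x_adv = (x + gen(x, task)).clamp(0, 1)
    return -F.cross_entropy(victim(x_adv), y)
```

Under the same sketch, a universal variant would decode from a single learned code rather than from each input (one perturbation applied to every image), and a targeted attack would minimize the cross-entropy toward the chosen target label instead of maximizing it toward the true one.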
Related papers
- Derail Yourself: Multi-turn LLM Jailbreak Attack through Self-discovered Clues [88.96201324719205]
This study exposes the safety vulnerabilities of Large Language Models (LLMs) in multi-turn interactions.
We introduce ActorAttack, a novel multi-turn attack method inspired by actor-network theory.
arXiv Detail & Related papers (2024-10-14T16:41:49Z)
- A Multi-task Adversarial Attack Against Face Authentication [16.86448076317697]
We propose a multi-task adversarial attack algorithm called MTADV that is adaptable to multiple users or systems.
MTADV is effective against various face datasets, including LFW, CelebA, and CelebA-HQ.
arXiv Detail & Related papers (2024-08-15T15:13:22Z)
- Cross-Task Attack: A Self-Supervision Generative Framework Based on Attention Shift [3.6015992701968793]
We propose a self-supervised Cross-Task Attack framework (CTA).
CTA generates cross-task perturbations by shifting the attention area of samples away from the co-attention map and closer to the anti-attention map (see the sketch below).
We conduct extensive experiments on multiple vision tasks, and the results confirm the effectiveness of the proposed design for adversarial attacks.
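The attention-shift objective admits a small sketch. The following hypothetical loss (not the authors' code) pushes the adversarial example's attention map away from a co-attention map and toward an anti-attention map via cosine similarity; attn_adv, co_attn, and anti_attn are assumed to be extracted from a surrogate model.

```python
import torch.nn.functional as F

def attention_shift_loss(attn_adv, co_attn, anti_attn):
    """Hypothetical CTA-style objective: repel the sample's attention
    from the co-attention map and attract it to the anti-attention map.
    Each map has shape (batch, H, W); lower loss = stronger shift."""
    a = attn_adv.flatten(1)
    push = F.cosine_similarity(a, co_attn.flatten(1), dim=1)    # repel
    pull = F.cosine_similarity(a, anti_attn.flatten(1), dim=1)  # attract
    return (push - pull).mean()
```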
arXiv Detail & Related papers (2024-07-18T17:01:10Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process (see the sketch below).
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
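One plausible reading of that sequential framing is a greedy decision loop over granularities. The sketch below is a hypothetical illustration, not the paper's method; actions (word-, phrase-, and sentence-level edits) and rank_score are assumed stand-ins for the paper's perturbation actions and attack objective.

```python
def sequential_attack(doc, query, actions, rank_score, budget=5):
    """Hedged sketch: at each step, try candidate edits at every
    granularity and keep the one that most improves the attacker's
    ranking objective; stop when no edit helps or the budget is spent."""
    for _ in range(budget):
        best = max(
            (act(doc) for act in actions),       # candidate perturbed docs
            key=lambda d: rank_score(query, d),  # attacker's objective
        )
        if rank_score(query, best) <= rank_score(query, doc):
            break                                # no improving edit left
        doc = best
    return doc
```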
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- MaskMA: Towards Zero-Shot Multi-Agent Decision Making with Mask-Based Collaborative Learning [56.00558959816801]
We propose a Mask-Based collaborative learning framework for Multi-Agent decision making (MaskMA).
We show MaskMA achieves an impressive 77.8% average zero-shot win rate on 60 unseen test maps via decentralized execution.
arXiv Detail & Related papers (2023-10-18T09:53:27Z)
- Multi-Task Models Adversarial Attacks [25.834775498006657]
Multi-Task Learning involves developing a singular model, known as a multi-task model, to concurrently perform multiple tasks.
The security of single-task models has been thoroughly studied, but multi-task models pose several critical security questions.
This paper addresses these queries through detailed analysis and rigorous experimentation.
arXiv Detail & Related papers (2023-05-20T03:07:43Z)
- Scalable Attribution of Adversarial Attacks via Multi-Task Learning [11.302242821058865]
The Adversarial Attribution Problem (AAP) asks which attack algorithm, victim model, and hyperparameters (three signatures) produced a given adversarial example.
We propose a multi-task learning framework named Multi-Task Adversarial Attribution (MTAA) to recognize the three signatures simultaneously (see the sketch below).
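Read this way, the attribution model admits a compact multi-head sketch. Below is a hedged, hypothetical PyTorch rendering with a shared backbone and one classification head per signature; layer sizes and names are assumptions, not MTAA's actual architecture.

```python
import torch.nn as nn

class AttributionNet(nn.Module):
    """Hypothetical MTAA-style sketch: shared features feed three heads,
    one per signature (attack algorithm, victim model, hyperparameter)."""

    def __init__(self, n_attacks, n_models, n_hparams):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attack_head = nn.Linear(32, n_attacks)  # which attack?
        self.model_head = nn.Linear(32, n_models)    # which victim model?
        self.hparam_head = nn.Linear(32, n_hparams)  # which hyperparameter?

    def forward(self, x):
        z = self.backbone(x)
        return self.attack_head(z), self.model_head(z), self.hparam_head(z)
```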
arXiv Detail & Related papers (2023-02-25T12:27:44Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) models improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts (see the sketch below).
This results in a sparsely activated multi-task model with a large number of parameters but the same computational cost as a dense model.
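A task-aware gate can be sketched in a few lines. This hypothetical PyTorch module (names and sizes are illustrative, not the paper's implementation) conditions routing on a task embedding and dispatches each example to its top-k experts, so only k experts run per example.

```python
import torch
import torch.nn as nn

class TaskAwareMoE(nn.Module):
    """Hedged sketch of task-aware gating: the gate sees a task embedding
    alongside the input, so different tasks route to different experts."""

    def __init__(self, dim, n_experts, n_tasks, k=1):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.task_emb = nn.Embedding(n_tasks, dim)
        self.gate = nn.Linear(2 * dim, n_experts)
        self.k = k  # experts activated per example (sparse activation)

    def forward(self, x, task_id):
        # x: (batch, dim); task_id: (batch,) long tensor of task indices.
        logits = self.gate(torch.cat([x, self.task_emb(task_id)], dim=-1))
        weights, idx = logits.softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):      # per-example sparse dispatch
            for j in range(self.k):
                out[b] += weights[b, j] * self.experts[idx[b, j].item()](x[b])
        return out
```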
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack (see the sketch below).
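The "double targeted" idea (one image-agnostic perturbation, targeted in both the source class it attacks and the sink class it pushes toward) can be illustrated with a small training loop. This is a hedged sketch under assumed details (optimizer, input size, budget), not the paper's DTA algorithm.

```python
import torch
import torch.nn.functional as F

def train_dt_uap(model, loader, source_cls, target_cls,
                 eps=10 / 255, lr=0.01, steps=1000):
    """Hypothetical DT-UAP-style loop: learn one shared perturbation that
    pushes inputs of `source_cls` toward `target_cls` for all images."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # universal
    opt = torch.optim.Adam([delta], lr=lr)
    data = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        mask = y == source_cls          # attack only the source class
        if not mask.any():
            continue
        x_adv = (x[mask] + delta).clamp(0, 1)
        tgt = torch.full((x_adv.size(0),), target_cls, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), tgt)  # pull toward sink class
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)     # keep within the L-inf budget
    return delta.detach()
```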
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.