Multi-Task Models Adversarial Attacks
- URL: http://arxiv.org/abs/2305.12066v3
- Date: Wed, 27 Dec 2023 21:57:15 GMT
- Title: Multi-Task Models Adversarial Attacks
- Authors: Lijun Zhang, Xiao Liu, Kaleel Mahmood, Caiwen Ding, Hui Guan
- Abstract summary: Multi-Task Learning involves developing a singular model, known as a multi-task model, to concurrently perform multiple tasks.
The security of single-task models has been thoroughly studied, but multi-task models pose several critical security questions.
This paper addresses these queries through detailed analysis and rigorous experimentation.
- Score: 25.834775498006657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-Task Learning (MTL) involves developing a singular model, known as a
multi-task model, to concurrently perform multiple tasks. While the security of
single-task models has been thoroughly studied, multi-task models pose several
critical security questions, such as 1) their vulnerability to single-task
adversarial attacks, 2) the possibility of designing attacks that target
multiple tasks, and 3) the impact of task sharing and adversarial training on
their resilience to such attacks. This paper addresses these queries through
detailed analysis and rigorous experimentation. First, we explore the
adaptation of single-task white-box attacks to multi-task models and identify
their limitations. We then introduce a novel attack framework, the Gradient
Balancing Multi-Task Attack (GB-MTA), which treats attacking a multi-task model
as an optimization problem. This problem, based on averaged relative loss
change across tasks, is approximated as an integer linear programming problem.
Extensive evaluations on MTL benchmarks, NYUv2 and Tiny-Taxonomy, demonstrate
GB-MTA's effectiveness against both standard and adversarially trained
multi-task models. The results also highlight a trade-off between task accuracy
improvement via parameter sharing and increased model vulnerability due to
enhanced attack transferability.
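The abstract names the ingredients of GB-MTA without the mechanics, so here is a minimal PyTorch-style sketch of the recipe as described: probe each task's relative loss change under a one-step perturbation, select the most sensitive tasks, then run a PGD-style attack on the averaged loss of the selected tasks. Everything here is an assumption for illustration: the name `gb_mta_attack`, the dict-returning `model`, and the `task_losses` interface are all hypothetical, and the greedy top-k selection is only a stand-in for the paper's integer linear program.

```python
import torch

def gb_mta_attack(model, x, targets, task_losses,
                  eps=8 / 255, alpha=2 / 255, steps=10, k=2):
    """Hypothetical sketch of a GB-MTA-style attack (not the authors' code).

    model(x) is assumed to return a dict of per-task outputs; task_losses
    maps task name -> loss function. Step 1 probes each task's relative
    loss change under a one-step sign-gradient perturbation; step 2 picks
    the k most sensitive tasks (greedy stand-in for the paper's ILP);
    step 3 runs PGD on the averaged loss of the chosen tasks.
    """
    x = x.clone().detach()
    x_adv = x.clone().requires_grad_(True)

    # Step 1: relative loss change per task under a one-step probe.
    rel_change = {}
    for t, loss_fn in task_losses.items():
        loss = loss_fn(model(x_adv)[t], targets[t])
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_probe = (x + alpha * grad.sign()).clamp(0, 1)
            probed = loss_fn(model(x_probe)[t], targets[t])
            rel_change[t] = ((probed - loss) / loss.abs().clamp_min(1e-8)).item()

    # Step 2: greedy top-k task selection (ILP surrogate for this sketch).
    chosen = sorted(rel_change, key=rel_change.get, reverse=True)[:k]

    # Step 3: PGD on the averaged loss of the selected tasks.
    for _ in range(steps):
        out = model(x_adv)
        loss = sum(task_losses[t](out[t], targets[t]) for t in chosen) / len(chosen)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # ascend, then project back into the eps-ball and valid range
            x_adv = (x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)).clamp(0, 1)
        x_adv.requires_grad_(True)
    return x_adv.detach()
```

Note that the actual GB-MTA optimizes the averaged relative loss change over task combinations via integer linear programming; the greedy step above only preserves the spirit of that selection.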
Related papers
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving [77.51817534090789]
We propose JiuZhang 2.0, a unified Chinese PLM designed specifically for multi-task mathematical problem solving.
Our idea is to maintain a moderate-sized model and employ cross-task knowledge sharing to improve the model capacity in a multi-task setting.
arXiv Detail & Related papers (2023-06-19T15:45:36Z)
- OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models [72.8156832931841]
Generalist models are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model.
We release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction.
arXiv Detail & Related papers (2022-12-08T17:07:09Z)
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on heavy customization for each task and leverage model size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large-scale multitask model and supports the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
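The TAPS entry above does not spell out the selection mechanism, so the following is a minimal sketch of the general idea: a learnable gate per layer decides whether that layer gets a task-specific residual, and a sparsity penalty keeps most layers shared. The class and function names are hypothetical, and the plain sigmoid gate is an assumption, not necessarily the relaxation the paper uses.

```python
import torch
import torch.nn as nn

class TaskAdaptiveLinear(nn.Module):
    """Hypothetical sketch of TAPS-style adaptive layer sharing.

    Keeps the frozen base weight and learns a task-specific residual
    gated by a scalar score; penalizing the gates encourages most
    layers to stay shared (gate ~ 0) so only a few layers become
    task-specific.
    """
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                   # base stays frozen / shared
        self.delta = nn.Parameter(torch.zeros_like(base.weight))
        self.score = nn.Parameter(torch.tensor(0.0))  # layer-selection gate

    def forward(self, x):
        gate = torch.sigmoid(self.score)
        w = self.base.weight + gate * self.delta      # shared + gated residual
        return nn.functional.linear(x, w, self.base.bias)

def sparsity_penalty(model, lam=1e-3):
    """L1 on the gates: drives most layers back to the shared weights."""
    return lam * sum(torch.sigmoid(m.score)
                     for m in model.modules()
                     if isinstance(m, TaskAdaptiveLinear))
```

Wrapping each nn.Linear of a frozen backbone in TaskAdaptiveLinear and adding sparsity_penalty(model) to the task loss then lets training decide which few layers deviate from the shared weights.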
- Controllable Dynamic Multi-Task Architectures [92.74372912009127]
We propose a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.
We propose disentangled training of two hypernetworks, exploiting task affinity and a novel branching regularized loss, so that the system takes input preferences and predicts tree-structured models with correspondingly adapted weights.
arXiv Detail & Related papers (2022-03-28T17:56:40Z)
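As a rough illustration of the hypernetwork idea in the entry above, the sketch below maps a task-preference vector to the weights of a single task head. It is deliberately reduced: the paper predicts tree-structured architectures with two disentangled hypernetworks, which this omits, and all names and dimensions (`PreferenceHypernetwork`, `feat_dim`, ...) are hypothetical.

```python
import torch
import torch.nn as nn

class PreferenceHypernetwork(nn.Module):
    """Hypothetical sketch: map a task-preference vector to the weights
    of a small task head, so one hypernetwork serves many preference
    trade-offs without retraining.
    """
    def __init__(self, num_tasks, feat_dim, out_dim, hidden=64):
        super().__init__()
        self.feat_dim, self.out_dim = feat_dim, out_dim
        self.mlp = nn.Sequential(
            nn.Linear(num_tasks, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim * out_dim + out_dim),
        )

    def forward(self, preference, features):
        # preference: (num_tasks,) simplex weights; features: (B, feat_dim)
        params = self.mlp(preference)
        w = params[: self.feat_dim * self.out_dim].view(self.out_dim, self.feat_dim)
        b = params[self.feat_dim * self.out_dim:]
        return features @ w.t() + b   # task head with predicted weights
```

For instance, `PreferenceHypernetwork(num_tasks=2, feat_dim=512, out_dim=10)` called with `torch.tensor([0.7, 0.3])` would score features under a 70/30 task preference without retraining.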
- Multi-Task Adversarial Attack [3.412750324146571]
Multi-Task Adversarial Attack (MTA) is a unified framework that can efficiently craft adversarial examples for multiple tasks.
MTA uses a perturbation generator that consists of an encoder shared across all tasks and multiple task-specific decoders.
Thanks to the shared encoder, MTA reduces storage cost and speeds up inference when attacking multiple tasks simultaneously.
arXiv Detail & Related papers (2020-11-19T13:56:58Z)
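The shared-encoder/per-task-decoder design in the MTA entry above is easy to picture in code; here is a minimal sketch under assumed conventions. The tanh-bounded perturbation, the `MTAGenerator` name, and all layer sizes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MTAGenerator(nn.Module):
    """Hypothetical sketch of an MTA-style perturbation generator:
    one shared encoder plus one lightweight decoder per task, so the
    encoder cost is paid once when attacking several tasks at once.
    """
    def __init__(self, tasks, ch=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared across tasks
            nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.decoders = nn.ModuleDict({               # task-specific heads
            t: nn.Conv2d(hidden, ch, 3, padding=1) for t in tasks
        })

    def forward(self, x, task, eps=8 / 255):
        z = self.encoder(x)                            # encode once
        delta = eps * torch.tanh(self.decoders[task](z))  # bounded perturbation
        return (x + delta).clamp(0, 1)
```

Because the encoder is shared, attacking a new task only adds one small decoder, which is the storage and inference saving the summary refers to.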
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)