Generating Adversarial Examples with Task Oriented Multi-Objective
Optimization
- URL: http://arxiv.org/abs/2304.13229v2
- Date: Thu, 1 Jun 2023 18:01:31 GMT
- Title: Generating Adversarial Examples with Task Oriented Multi-Objective
Optimization
- Authors: Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung
- Abstract summary: Adversarial training is one of the most efficient methods to improve the model's robustness.
We propose Task Oriented MOO to address this issue.
Our principle is to only maintain the goal-achieved tasks, while letting the optimizer spend more effort on improving the goal-unachieved tasks.
- Score: 21.220906842166425
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning models, even state-of-the-art ones, are highly vulnerable
to adversarial examples. Adversarial training is one of the most efficient
methods to improve the model's robustness. The key factor for the success of
adversarial training is the capability to generate qualified and divergent
adversarial examples which satisfy some objectives/goals (e.g., finding
adversarial examples that maximize the model losses for simultaneously
attacking multiple models). Therefore, multi-objective optimization (MOO) is a
natural tool for adversarial example generation to achieve multiple
objectives/goals simultaneously. However, we observe that a naive application
of MOO tends to maximize all objectives/goals equally, without regard to whether
an objective/goal has already been achieved. This wastes effort on further
improving the goal-achieved tasks while putting less focus on the
goal-unachieved tasks. In this paper, we propose \emph{Task Oriented MOO} to
address this issue, in the context where we can explicitly define the goal
achievement for a task. Our principle is to only maintain the goal-achieved
tasks, while letting the optimizer spend more effort on improving the
goal-unachieved tasks. We conduct comprehensive experiments for our Task
Oriented MOO on various adversarial example generation schemes. The
experimental results firmly demonstrate the merit of our proposed approach. Our
code is available at \url{https://github.com/tuananhbui89/TAMOO}.
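
To make the "maintain vs. improve" principle concrete, here is a minimal, hypothetical sketch of a PGD-style ensemble attack with task-oriented weighting: a model that is already fooled (its goal is achieved) keeps only a small maintenance weight, and the remaining update budget shifts to the models that still classify the example correctly. The function and parameter names (`task_oriented_pgd`, `models`, `eps`, `step_size`, the 0.1 maintenance weight) are illustrative assumptions, not the authors' TAMOO implementation from the repository above.

```python
import torch
import torch.nn.functional as F

def task_oriented_pgd(models, x, y, eps=8/255, step_size=2/255, steps=10):
    """Hypothetical sketch of task-oriented weighting for an ensemble attack."""
    # Random start inside the eps-ball, clamped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grads, achieved = [], []
        for model in models:
            logits = model(x_adv)
            loss = F.cross_entropy(logits, y)        # per-task attack objective
            grads.append(torch.autograd.grad(loss, x_adv)[0])
            achieved.append((logits.argmax(dim=1) != y).float())  # goal: model is fooled
        achieved = torch.stack(achieved)             # [num_tasks, batch]
        # Goal-achieved tasks keep a small "maintenance" weight (0.1, an assumption);
        # the rest of the budget goes to tasks whose goal is not yet achieved.
        w = torch.where(achieved.bool(),
                        torch.full_like(achieved, 0.1),
                        torch.ones_like(achieved))
        w = w / w.sum(dim=0, keepdim=True)           # normalize weights per example
        combined = sum(wk.view(-1, 1, 1, 1) * g for wk, g in zip(w, grads))
        x_adv = x_adv.detach() + step_size * combined.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

The same weighting idea carries over to other goal definitions (e.g., targeted attacks) by swapping the achievement test; the authors' actual criterion and weighting scheme are given in the paper and repository.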
Related papers
- Zero-Shot Offline Imitation Learning via Optimal Transport [21.548195072895517]
Zero-shot imitation learning algorithms reproduce unseen behavior from as little as a single demonstration at test time.
Existing practical approaches view the expert demonstration as a sequence of goals, enabling imitation with a high-level goal selector and a low-level goal-conditioned policy.
We introduce a method that instead directly optimizes the occupancy-matching objective that is intrinsic to imitation learning.
arXiv Detail & Related papers (2024-10-11T12:10:51Z) - ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning [59.08197876733052]
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
Learning multiple tasks simultaneously can sometimes result in lower accuracy than learning only the target task, a phenomenon known as negative transfer.
ForkMerge is a novel approach that periodically forks the model into multiple branches and automatically searches over varying task weights.
arXiv Detail & Related papers (2023-01-30T02:27:02Z) - Discrete Factorial Representations as an Abstraction for Goal
Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally show improved expected return on out-of-distribution goals, while still allowing goals to be specified with expressive structure.
arXiv Detail & Related papers (2022-11-01T03:31:43Z) - Stein Variational Goal Generation for adaptive Exploration in Multi-Goal
Reinforcement Learning [18.62133925594957]
In multi-goal Reinforcement Learning, an agent can share experience between related training tasks, resulting in better generalization at test time.
In this work we propose Stein Variational Goal Generation (SVGG), which samples goals of intermediate difficulty for the agent.
The distribution of goals is modeled with particles that are attracted to areas of appropriate difficulty using Stein Variational Gradient Descent.
arXiv Detail & Related papers (2022-06-14T10:03:17Z) - Generative multitask learning mitigates target-causing confounding [61.21582323566118]
We propose a simple and scalable approach to causal representation learning for multitask learning.
The improvement comes from mitigating unobserved confounders that cause the targets, but not the input.
Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
arXiv Detail & Related papers (2022-02-08T20:42:14Z) - Conflict-Averse Gradient Descent for Multi-task Learning [56.379937772617]
A major challenge in optimizing a multi-task model is conflicting gradients between tasks; a toy illustration of such a conflict is sketched at the end of this list.
We introduce Conflict-Averse Gradient Descent (CAGrad), which minimizes the average loss function while leveraging the worst local improvement of individual tasks to regularize the trajectory.
CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss.
arXiv Detail & Related papers (2021-10-26T22:03:51Z) - Adversarial Intrinsic Motivation for Reinforcement Learning [60.322878138199364]
We investigate whether the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution can be utilized effectively for reinforcement learning tasks.
Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function.
arXiv Detail & Related papers (2021-05-27T17:51:34Z) - Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
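
For readers unfamiliar with the term used in the CAGrad entry above, the toy snippet below shows the usual operational meaning of "conflicting gradients": two task losses whose gradients with respect to shared parameters have a negative inner product. The losses are contrived for illustration, and CAGrad's actual conflict-averse update is not reproduced here.

```python
import torch

# Toy setup: one shared linear layer and two task losses deliberately
# constructed to pull its weights in opposite directions.
shared = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)
out = shared(x)
loss_a = out[:, 0].mean()    # hypothetical task A objective
loss_b = -out[:, 0].mean()   # hypothetical task B objective, opposed to A

grad_a = torch.autograd.grad(loss_a, shared.weight, retain_graph=True)[0]
grad_b = torch.autograd.grad(loss_b, shared.weight)[0]

# A negative inner product between the flattened task gradients is the usual
# operational definition of a gradient conflict.
conflict = torch.dot(grad_a.flatten(), grad_b.flatten()) < 0
print(f"task gradients conflict: {conflict.item()}")  # True for these opposed losses
```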