Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems
- URL: http://arxiv.org/abs/2311.00859v1
- Date: Wed, 1 Nov 2023 21:28:02 GMT
- Title: Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems
- Authors: Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu
- Abstract summary: We formulate the problem of performing optimal adversarial agent-to-agent attacks using distributed attack agents.
We propose an optimal method integrating within-step static constrained attack-resource allocation optimization and between-step dynamic programming.
Our numerical results show that the proposed attacks can significantly reduce the rewards received by the attacked agents.
- Score: 6.69087470775851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding optimal adversarial attack strategies is an important topic in
reinforcement learning and Markov decision processes. Previous studies
usually assume one all-knowing coordinator (attacker) for whom attacking
different recipient (victim) agents incurs uniform costs. However, in reality,
instead of using one limitless central attacker, the attacks often need to be
performed by distributed attack agents. We formulate the problem of performing
optimal adversarial agent-to-agent attacks using distributed attack agents, in
which we impose distinct cost constraints on each different attacker-victim
pair. We propose an optimal method integrating within-step static constrained
attack-resource allocation optimization and between-step dynamic programming to
achieve the optimal adversarial attack in a multi-agent system. Our numerical
results show that the proposed attacks can significantly reduce the rewards
received by the attacked agents.
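
As a loose illustration of how the two ingredients named in the abstract can fit together, the sketch below combines a within-step search over feasible attack-resource allocations with backward dynamic programming over the remaining per-pair budgets. The pair names, budget model, and toy victim-reward function are assumptions made for the example, not the paper's formulation.

```python
# Illustrative sketch (not the paper's algorithm): between-step dynamic
# programming over remaining per-pair attack budgets, with the within-step
# allocation chosen by enumerating a small discrete set of feasible choices.
# Pair names, budgets, and the victim reward model are toy assumptions.
import itertools
from functools import lru_cache

H = 3                          # horizon (number of steps)
PAIRS = ("a1->v1", "a1->v2")   # attacker-victim pairs (assumed)
TOTAL_BUDGET = (3, 2)          # episode-level cost cap for each pair (assumed)
LEVELS = (0, 1, 2)             # per-step attack-resource levels (assumed)

def victim_reward(step, alloc):
    """Toy model: spending more attack resources on a pair lowers the
    victims' collective reward at this step (purely illustrative)."""
    return 10.0 - sum(1.5 * a for a in alloc) - 0.5 * step

@lru_cache(maxsize=None)
def min_victim_reward(step, remaining):
    """Between-step DP: minimum total victim reward achievable from `step`
    onward, given the remaining budget of each attacker-victim pair."""
    if step == H:
        return 0.0
    best = float("inf")
    # Within-step constrained allocation: enumerate feasible choices.
    for alloc in itertools.product(LEVELS, repeat=len(PAIRS)):
        if all(a <= r for a, r in zip(alloc, remaining)):
            new_remaining = tuple(r - a for a, r in zip(alloc, remaining))
            total = victim_reward(step, alloc) + min_victim_reward(step + 1, new_remaining)
            best = min(best, total)
    return best

if __name__ == "__main__":
    print("minimised cumulative victim reward:", min_victim_reward(0, TOTAL_BUDGET))
```

The structural point is that the inner enumeration plays the role of a static constrained allocation for the current step, while the outer recursion carries value across steps.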
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is largely overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Adversarial Attacks on Cooperative Multi-agent Bandits [41.79235070291252]
We study adversarial attacks on CMA2B in both homogeneous and heterogeneous settings.
In the homogeneous setting, we propose attack strategies that convince all agents to select a particular target arm $T-o(T)$ times while incurring $o(T)$ attack costs in $T$ rounds.
In the heterogeneous setting, we prove that a target arm attack requires linear attack costs and propose attack strategies that can force a maximum number of agents to suffer linear regrets while incurring sublinear costs and manipulating only the observations of a few target agents.
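
For intuition only, the following sketch shows the general flavor of a target-arm reward-manipulation attack against a single UCB victim; the multi-agent coordination, the heterogeneous case, and the cost guarantees summarized above are not reproduced, and all constants are illustrative assumptions.

```python
# Simplified single-victim sketch of a reward-manipulation attack that steers
# a UCB learner toward a chosen target arm.  Only meant to convey the idea.
import math, random

K, T, TARGET = 3, 5000, 2
true_means = [0.9, 0.8, 0.3]        # the target arm is actually the worst (assumed)
counts, sums = [0] * K, [0.0] * K
attack_cost = 0.0

for t in range(1, T + 1):
    # UCB1 victim: pull each arm once, then maximise the upper confidence bound.
    if t <= K:
        arm = t - 1
    else:
        arm = max(range(K), key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    reward = true_means[arm] + random.gauss(0, 0.1)

    # Attacker: push observed rewards of non-target arms below the target's mean.
    if arm != TARGET and counts[TARGET] > 0:
        target_mean = sums[TARGET] / counts[TARGET]
        corrupted = min(reward, target_mean - 0.1)   # margin of 0.1 (assumed)
        attack_cost += reward - corrupted
        reward = corrupted

    counts[arm] += 1
    sums[arm] += reward

print("target-arm pulls:", counts[TARGET], "of", T,
      "| attack cost:", round(attack_cost, 1))
```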
arXiv Detail & Related papers (2023-11-03T04:03:19Z)
- Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning [45.408568528354216]
We investigate the impact of adversarial attacks on multi-agent reinforcement learning (MARL).
In the considered setup, there is an attacker who is able to modify the rewards before the agents receive them or manipulate the actions before the environment receives them.
We show that the mixed attack strategy can efficiently attack MARL agents even if the attacker has no prior information about the underlying environment and the agents' algorithms.
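
A minimal sketch of the attacker capability assumed in this setting: the adversary wraps the environment and may rewrite the joint action before the environment receives it or the rewards before the agents receive them. The dummy environment, the forced action, and the zeroed rewards below are illustrative assumptions, not the paper's mixed attack strategy.

```python
# Minimal sketch (assumptions throughout) of an adversary sitting between the
# agents and the environment, able to corrupt actions and rewards in transit.
from typing import List, Tuple

class DummyMultiAgentEnv:
    def step(self, joint_action: List[int]) -> Tuple[str, List[float], bool]:
        # Toy dynamics: each agent earns 1.0 for playing action 0.
        rewards = [1.0 if a == 0 else 0.0 for a in joint_action]
        return "obs", rewards, False

class AttackedEnv:
    """Wraps an environment and applies action/reward manipulations."""
    def __init__(self, env):
        self.env = env

    def step(self, joint_action: List[int]) -> Tuple[str, List[float], bool]:
        # Action manipulation: force the first agent's action to 1.
        corrupted_action = [1] + joint_action[1:]
        obs, rewards, done = self.env.step(corrupted_action)
        # Reward manipulation: the agents observe zeroed rewards.
        return obs, [0.0 for _ in rewards], done

if __name__ == "__main__":
    env = AttackedEnv(DummyMultiAgentEnv())
    print(env.step([0, 0]))   # agents receive corrupted rewards
```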
arXiv Detail & Related papers (2023-07-15T00:38:55Z)
- Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks [21.97069271045167]
In targeted poisoning attacks, an attacker manipulates an agent-environment interaction to force the agent into adopting a policy of interest, called target policy.
We study targeted poisoning attacks in a two-agent setting where an attacker implicitly poisons the effective environment of one of the agents by modifying the policy of its peer.
We develop an optimization framework for designing optimal attacks, where the cost of the attack measures how much the solution deviates from the assumed default policy of the peer agent.
arXiv Detail & Related papers (2023-02-27T14:52:15Z)
- Adversarial Attacks on Adversarial Bandits [10.891819703383408]
We show that the attacker is able to mislead any no-regret adversarial bandit algorithm into selecting a suboptimal target arm.
This result implies a critical security concern in real-world bandit-based systems.
arXiv Detail & Related papers (2023-01-30T00:51:39Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
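
For background, the sketch below runs a plain projected-gradient (PGD-style) attack on a toy linear classifier; the guided, surrogate-based mechanism that distinguishes G-PGA is not reproduced, and the model weights, budget, and step count are illustrative assumptions.

```python
# Plain L_inf projected gradient attack on a toy linear classifier (context
# only; not G-PGA).  Model and data are illustrative assumptions.
import numpy as np

w, b = np.array([1.5, -2.0]), 0.1      # toy linear model: score = w.x + b
x0, y = np.array([1.0, 0.5]), +1       # clean input and its true label
eps, alpha, steps = 0.3, 0.05, 40      # L_inf budget, step size, iterations

x = x0.copy()
for _ in range(steps):
    # For a linear model the gradient of the margin y*(w.x + b) w.r.t. x is
    # y*w; take a signed ascent step on the loss (descent on the margin).
    grad_loss = -y * w
    x = x + alpha * np.sign(grad_loss)
    # Project back into the L_inf ball of radius eps around the clean input.
    x = np.clip(x, x0 - eps, x0 + eps)

print("clean margin:      ", y * (w @ x0 + b))
print("adversarial margin:", y * (w @ x + b))  # negative => attack succeeded
```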
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)