Adversarial Attacks On Multi-Agent Communication
- URL: http://arxiv.org/abs/2101.06560v1
- Date: Sun, 17 Jan 2021 00:35:26 GMT
- Title: Adversarial Attacks On Multi-Agent Communication
- Authors: James Tu, Tsunhsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye
Ren, Raquel Urtasun
- Abstract summary: Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
- Score: 80.4392160849506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Growing at a very fast pace, modern autonomous systems will soon be deployed
at scale, opening up the possibility for cooperative multi-agent systems. By
sharing information and distributing workloads, autonomous agents can better
perform their tasks and enjoy improved computation efficiency. However, such
advantages rely heavily on communication channels which have been shown to be
vulnerable to security breaches. Thus, communication can be compromised to
execute adversarial attacks on deep learning models which are widely employed
in modern systems. In this paper, we explore such adversarial attacks in a
novel multi-agent setting where agents communicate by sharing learned
intermediate representations. We observe that an indistinguishable adversarial
message can severely degrade performance, but becomes weaker as the number of
benign agents increases. Furthermore, we show that transfer attacks are more
difficult in this setting when compared to directly perturbing the inputs, as
it is necessary to align the distribution of communication messages with domain
adaptation. Finally, we show that low-budget online attacks can be achieved by
exploiting the temporal consistency of streaming sensory inputs.
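The observation that a single adversarial message weakens as benign agents are added can be pictured with a toy fused-message model. The mean-pooling aggregator and linear readout below are hypothetical stand-ins (the paper attacks learned intermediate representations of a detection network, not a linear model), but they make the dilution arithmetic explicit: with mean aggregation, an L-inf bounded perturbation of one agent's message shifts the output by exactly eps * ||w||_1 / N.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
w = rng.normal(size=d)                  # hypothetical linear readout of the fused message
eps = 0.5                               # L-inf budget for the adversarial message

def readout(msgs):
    # mean-pool the received messages, then apply the linear readout
    return w @ np.mean(msgs, axis=0)

shifts = []
for n_agents in (2, 8, 32):             # one attacker among n_agents senders
    msgs = rng.normal(size=(n_agents, d))
    attacked = msgs.copy()
    # FGSM-style worst case inside the L-inf ball: push every coordinate of
    # the attacker's message in the sign of the readout weights
    attacked[0] += eps * np.sign(w)
    shifts.append(readout(attacked) - readout(msgs))

# The induced shift is eps * ||w||_1 / n_agents: the attack is diluted
# as the number of benign agents grows.
print(shifts)
```

Under this toy model the attack's effect shrinks as 1/N, which is consistent with (though much simpler than) the degradation trend the paper reports.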
Related papers
- Will 6G be Semantic Communications? Opportunities and Challenges from
Task Oriented and Secure Communications to Integrated Sensing [49.83882366499547]
This paper explores opportunities and challenges of task (goal)-oriented and semantic communications for next-generation (NextG) networks through the integration of multi-task learning.
We employ deep neural networks representing a dedicated encoder at the transmitter and multiple task-specific decoders at the receiver.
We scrutinize potential vulnerabilities stemming from adversarial attacks during both training and testing phases.
arXiv Detail & Related papers (2024-01-03T04:01:20Z)
- Learning to Cooperate and Communicate Over Imperfect Channels [27.241873614561538]
We consider a cooperative multi-agent system where the agents act and exchange information in a decentralized manner using a limited and unreliable channel.
Our method allows agents to dynamically adapt how much information to share by sending messages of different sizes.
We show that our approach outperforms approaches without adaptive capabilities in a novel cooperative digit-prediction environment.
arXiv Detail & Related papers (2023-11-24T12:15:48Z)
- Communication-Robust Multi-Agent Learning by Adaptable Auxiliary Multi-Agent Adversary Generation [8.376257490773192]
Communication can promote coordination in cooperative Multi-Agent Reinforcement Learning (MARL).
We propose an adaptable method of Multi-Agent Auxiliary Adversaries Generation for robust Communication, dubbed MA3C, to obtain a robust communication-based policy.
arXiv Detail & Related papers (2023-05-09T01:29:46Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent.
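One classical way to tolerate up to $C$ corrupted messages out of $N$ is robust aggregation. The sketch below uses a coordinate-wise trimmed mean as a stand-in; it illustrates why the $C < \frac{N-1}{2}$ budget matters, but it is not the paper's actual certification scheme.

```python
import numpy as np

def robust_fuse(msgs, c):
    """Coordinate-wise trimmed mean: drop the c smallest and c largest values
    per coordinate before averaging. If at most c of the messages are
    corrupted, every value that survives the trim is bracketed by benign
    values, bounding the attacker's influence."""
    msgs = np.sort(np.asarray(msgs), axis=0)
    return msgs[c:len(msgs) - c].mean(axis=0)

# 5 senders, at most 1 attacker: c = 1 satisfies c < (5 - 1) / 2
benign = [np.full(3, v) for v in (0.9, 1.0, 1.0, 1.1)]
adv = np.full(3, 1e6)                  # arbitrarily corrupted message
fused = robust_fuse(benign + [adv], c=1)
print(fused)                           # stays near 1.0 despite the outlier
```

Once the number of attackers reaches half the senders, no aggregation rule can distinguish the benign majority, which is why bounds of this form stop below $\frac{N-1}{2}$.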
arXiv Detail & Related papers (2022-06-21T07:32:18Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Gaussian Process Based Message Filtering for Robust Multi-Agent Cooperation in the Presence of Adversarial Communication [5.161531917413708]
We consider the problem of providing robustness to adversarial communication in multi-agent systems.
We propose a communication architecture based on Graph Neural Networks (GNNs).
We show that our filtering method is able to reduce the impact that non-cooperative agents cause.
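A minimal sketch of the filtering idea: score each incoming message by how well it agrees with the receiver's own features, then downweight outliers. The distance-plus-softmax scoring rule here is an assumed stand-in for the paper's Gaussian-process posterior.

```python
import numpy as np

def filtered_aggregate(own_feat, msgs, tau=1.0):
    """Weight each incoming message by agreement with the receiver's own
    features (negative squared distance, softmaxed), then average."""
    msgs = np.asarray(msgs)
    scores = -np.sum((msgs - own_feat) ** 2, axis=1) / tau
    wts = np.exp(scores - scores.max())   # stable softmax over messages
    wts /= wts.sum()
    return wts @ msgs, wts

own = np.zeros(4)
good = [own + 0.1, own - 0.1]           # cooperative messages near consensus
bad = [own + 5.0]                       # non-cooperative message, far away
fused, wts = filtered_aggregate(own, good + bad)
print(wts)                              # the outlier's weight is negligible
```

Hard rejection (dropping low-scoring messages entirely) is an alternative design; soft weighting keeps the aggregation differentiable, which matters when the filter is trained end to end.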
arXiv Detail & Related papers (2020-12-01T14:21:58Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
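The prior can be pictured as a small feed-forward gate over a pair's observations, as sketched below. Every detail here (layer sizes, initialisation, the 0.5 threshold) is a hypothetical stand-in; in I2C the prior is trained with labels derived from causal inference, not hand-set.

```python
import numpy as np

rng = np.random.default_rng(0)

class CommPrior:
    """Feed-forward gate: given agent i's and agent j's observations,
    estimate the probability that messaging j is worth the overhead."""
    def __init__(self, obs_dim, hidden=8):
        self.w1 = rng.normal(scale=0.5, size=(2 * obs_dim, hidden))
        self.w2 = rng.normal(scale=0.5, size=(hidden, 1))

    def should_talk(self, obs_i, obs_j, thresh=0.5):
        h = np.tanh(np.concatenate([obs_i, obs_j]) @ self.w1)
        p = 1.0 / (1.0 + np.exp(-(h @ self.w2)[0]))  # sigmoid output in (0, 1)
        return bool(p > thresh)  # message only when the prior fires

prior = CommPrior(obs_dim=4)
print(prior.should_talk(np.ones(4), np.zeros(4)))
```

Gating requests this way cuts the quadratic all-to-all message cost down to only the pairs the prior deems useful.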
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information and is not responsible for any consequences of its use.