Mixed Cooperative-Competitive Communication Using Multi-Agent
Reinforcement Learning
- URL: http://arxiv.org/abs/2110.15762v1
- Date: Fri, 29 Oct 2021 13:25:07 GMT
- Title: Mixed Cooperative-Competitive Communication Using Multi-Agent
Reinforcement Learning
- Authors: Astrid Vanneste, Wesley Van Wijnsberghe, Simon Vanneste, Kevin Mets,
Siegfried Mercelis, Steven Latré, Peter Hellinckx
- Abstract summary: We apply differentiable inter-agent learning (DIAL) to a mixed cooperative-competitive setting.
We look at the difference in performance between communication that is private to a team and communication that can be overheard by the other team.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication between agents in multi-agent environments can reduce the effects of partial observability by combining one agent's observations with those of others in the same dynamic environment. While much successful research has addressed communication learning in cooperative settings, communication learning in mixed cooperative-competitive settings is also important and brings its own complexities, such as the opposing team overhearing the communication. In this paper, we apply differentiable inter-agent learning (DIAL), designed for cooperative settings, to a mixed cooperative-competitive setting. We examine the difference in performance between communication that is private to a team and communication that can be overheard by the other team. Our research shows that communicating agents can achieve performance similar to that of fully observable agents after a given training period in our chosen environment. Overall, we find that sharing communication across teams decreases the performance of the communicating team compared to the results achieved with private communication.
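To make the setup concrete, here is a minimal PyTorch sketch, not the authors' implementation, of a DIAL-style agent whose real-valued message keeps gradients flowing between sender and receiver during training, plus a routing helper that switches between team-private and overheard communication. The network sizes, the `route_messages` helper, and the toy dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DIALAgent(nn.Module):
    """Q-network that also emits a real-valued message.

    During centralised training the message stays continuous, so the
    receiver's loss can backpropagate through the channel into the sender
    (the core idea of DIAL); at execution time it would be discretised.
    """

    def __init__(self, obs_dim: int, msg_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden),
            nn.ReLU(),
        )
        self.q_head = nn.Linear(hidden, n_actions)   # action-value estimates
        self.msg_head = nn.Linear(hidden, msg_dim)   # outgoing message

    def forward(self, obs: torch.Tensor, incoming: torch.Tensor):
        h = self.encoder(torch.cat([obs, incoming], dim=-1))
        return self.q_head(h), torch.tanh(self.msg_head(h))


def route_messages(messages, team_of, private):
    """Average the messages each agent receives for the next time step.

    private=True  -> agents only hear teammates (team-private channel).
    private=False -> every agent, including opponents, overhears everything.
    Assumes each agent has at least one eligible sender.
    """
    routed = []
    for i in range(len(messages)):
        senders = [m for j, m in enumerate(messages)
                   if j != i and (not private or team_of[j] == team_of[i])]
        routed.append(torch.stack(senders).mean(dim=0))
    return routed


# Toy rollout step: four agents, two teams of two, private communication.
obs_dim, msg_dim, n_actions = 8, 4, 5
agents = [DIALAgent(obs_dim, msg_dim, n_actions) for _ in range(4)]
team_of = [0, 0, 1, 1]
incoming = [torch.zeros(msg_dim) for _ in agents]        # silence at t = 0
observations = [torch.randn(obs_dim) for _ in agents]
q_values, outgoing = zip(*[agent(obs, msg)
                           for agent, obs, msg in zip(agents, observations, incoming)])
incoming = route_messages(list(outgoing), team_of, private=True)
```

In this sketch the only difference between the two experimental conditions is the `private` flag: with `private=False`, opposing agents receive the same averaged messages as teammates, mirroring the overheard-communication condition compared in the paper.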
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), a crucial capability for understanding others, significantly impacts human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration [15.91335141803629]
We propose Targeted and Trusted Multi-Agent Communication (T2MAC) to help agents learn selective engagement and evidence-driven integration.
T2MAC enables agents to craft individualized messages, pinpoint ideal communication windows, and engage with reliable partners.
We evaluate our method on a diverse set of cooperative multi-agent tasks of varying difficulty and scale.
arXiv Detail & Related papers (2024-01-19T18:00:33Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to communicate with each other effectively and efficiently. Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Provably Efficient Cooperative Multi-Agent Reinforcement Learning with Function Approximation [15.411902255359074]
We show that it is possible to achieve near-optimal no-regret learning even with a fixed constant communication budget.
Our work generalizes several ideas from the multi-agent contextual and multi-armed bandit literature to MDPs and reinforcement learning.
arXiv Detail & Related papers (2021-03-08T18:51:00Z)
- Emergent Communication under Competition [10.926117869188651]
We introduce a modified sender-receiver game to study the spectrum of partially competitive scenarios.
We show that communication is proportional to cooperation and that it can occur in partially competitive scenarios.
arXiv Detail & Related papers (2021-01-25T17:58:22Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that, under two realistic assumptions (a non-uniform distribution of intents and a common-knowledge energy cost), these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve performance in a variety of multi-agent cooperative scenarios (a minimal sketch of this gating idea appears after this list).
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
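As referenced in the I2C entry above, here is a minimal sketch, under our own assumptions rather than the I2C authors' code, of the gating idea: a small feed-forward prior network decides, per potential partner, whether initiating communication is worthwhile, so an agent talks only to partners predicted to be relevant instead of broadcasting to everyone.

```python
import torch
import torch.nn as nn


class CommGate(nn.Module):
    """Feed-forward prior: given the local observation and an encoding of a
    potential partner, output the probability that communicating with that
    partner is worthwhile. (In I2C this signal is learned via causal
    inference; here it is just an untrained placeholder network.)"""

    def __init__(self, obs_dim: int, partner_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + partner_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, obs: torch.Tensor, partner: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, partner], dim=-1))


# Request communication only from partners whose predicted relevance clears
# a threshold, reducing message overhead relative to broadcasting.
gate = CommGate(obs_dim=8, partner_dim=4)
obs = torch.randn(8)
partner_encodings = [torch.randn(4) for _ in range(3)]
talk_to = [j for j, p in enumerate(partner_encodings)
           if gate(obs, p).item() > 0.5]
```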
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.