On Emergent Communication in Competitive Multi-Agent Teams
- URL: http://arxiv.org/abs/2003.01848v2
- Date: Thu, 16 Jul 2020 04:15:59 GMT
- Title: On Emergent Communication in Competitive Multi-Agent Teams
- Authors: Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe
Morency, Satwik Kottur
- Abstract summary: We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several recent works have found the emergence of grounded compositional
language in the communication protocols developed by mostly cooperative
multi-agent systems when learned end-to-end to maximize performance on a
downstream task. However, human populations learn to solve complex tasks
involving communicative behaviors not only in fully cooperative settings but
also in scenarios where competition acts as an additional external pressure for
improvement. In this work, we investigate whether competition for performance
from an external, similar agent team could act as a social influence that
encourages multi-agent populations to develop better communication protocols
for improved performance, compositionality, and convergence speed. We start
from Task & Talk, a previously proposed referential game between two
cooperative agents as our testbed and extend it into Task, Talk & Compete, a
game involving two competitive teams each consisting of two aforementioned
cooperative agents. Using this new setting, we provide an empirical study
demonstrating the impact of competitive influence on multi-agent teams. Our
results show that an external competitive influence leads to improved accuracy
and generalization, as well as faster emergence of communicative languages that
are more informative and compositional.
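The competitive coupling described above can be illustrated with a toy reward function. This is a minimal sketch, not the paper's actual formulation: the function name, the linear reward shaping, and the `weight` parameter are all illustrative assumptions. The idea it captures is that each team's reward mixes its own referential-game success with a term for outperforming the external competitor team.

```python
def competitive_rewards(success_a, success_b, weight=0.5):
    """Couple two teams' task successes with a competitive term.

    success_a, success_b: task success of each two-agent team in [0, 1]
    weight: assumed strength of the competitive pressure (hypothetical)
    """
    # Each team gets its own success plus a bonus (or penalty) for
    # beating (or losing to) the other team on the same task.
    r_a = success_a + weight * (success_a - success_b)
    r_b = success_b + weight * (success_b - success_a)
    return r_a, r_b

# Example: team A solves its referential game, team B does not.
r_a, r_b = competitive_rewards(1.0, 0.0)
print(r_a, r_b)  # team A is rewarded extra; team B is penalized
```

With `weight=0`, this reduces to the purely cooperative Task & Talk setting; a positive weight adds the external competitive influence studied in the paper.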
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Improving Multi-Agent Debate with Sparse Communication Topology [9.041025703879905]
Multi-agent debate has proven effective in improving the quality of large language model outputs on reasoning and factuality tasks.
In this paper, we investigate the effect of communication connectivity in multi-agent systems.
Our experiments on GPT and Mistral models reveal that multi-agent debates leveraging sparse communication topology can achieve comparable or superior performance.
arXiv Detail & Related papers (2024-06-17T17:33:09Z)
- CompeteAI: Understanding the Competition Dynamics in Large Language Model-based Agents [43.46476421809271]
Large language models (LLMs) have been widely used as agents to complete different tasks.
We propose a general framework for studying the competition between agents.
We then implement a practical competitive environment using GPT-4 to simulate a virtual town.
arXiv Detail & Related papers (2023-10-26T16:06:20Z)
- Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [52.930183136111864]
We propose using scorable negotiation to evaluate Large Language Models (LLMs).
To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities.
We provide procedures to create new games and increase games' difficulty to have an evolving benchmark.
arXiv Detail & Related papers (2023-09-29T13:33:06Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Mixed Cooperative-Competitive Communication Using Multi-Agent Reinforcement Learning [0.0]
We apply differentiable inter-agent learning (DIAL) to a mixed cooperative-competitive setting.
We look at the difference in performance between communication that is private for a team and communication that can be overheard by the other team.
arXiv Detail & Related papers (2021-10-29T13:25:07Z)
- Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning [12.676356746752893]
Training a multi-agent reinforcement learning (MARL) algorithm is more challenging than training a single-agent reinforcement learning algorithm.
We propose an algorithm that boosts MARL training using the biased action information of other agents based on a friend-or-foe concept.
We empirically demonstrate that our algorithm outperforms existing algorithms in various mixed cooperative-competitive environments.
arXiv Detail & Related papers (2021-01-18T05:52:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.