Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman
- URL: http://arxiv.org/abs/2009.05940v1
- Date: Sun, 13 Sep 2020 07:11:37 GMT
- Title: Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman
- Authors: Takuma Yoneda, Matthew R. Walter, Jason Naradowsky
- Abstract summary: In multi-agent learning, agents must coordinate with each other in order to succeed. For humans, this coordination is typically accomplished through the use of language.
We construct Pow-Wow, a new dataset for studying situated goal-directed human communication.
We analyze the types of communications which result in effective game strategies, annotate them accordingly, and present corpus-level statistical analysis of how trends in communications affect game outcomes.
- Score: 12.498028338281625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-agent learning, agents must coordinate with each other in order to
succeed. For humans, this coordination is typically accomplished through the
use of language. In this work we perform a controlled study of human language
use in a competitive team-based game, and search for useful lessons for
structuring communication protocol between autonomous agents. We construct
Pow-Wow, a new dataset for studying situated goal-directed human communication.
Using the Pommerman game environment, we enlisted teams of humans to play
against teams of AI agents, recording their observations, actions, and
communications. We analyze the types of communications which result in
effective game strategies, annotate them accordingly, and present corpus-level
statistical analysis of how trends in communications affect game outcomes.
Based on this analysis, we design a communication policy for learning agents,
and show that agents which utilize communication achieve higher win-rates
against baseline systems than those which do not.
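The abstract describes equipping learning agents with a communication policy but does not specify it here. As a purely illustrative sketch (all class names, message tokens, and actions below are hypothetical, not from the paper), the general idea of a communication-augmented agent can be shown as a policy that conditions on both its own observation and the last message received from a teammate, and that emits a message alongside each environment action:

```python
import random

# Hypothetical discrete message vocabulary; the paper's actual message
# categories are derived from its corpus annotation, not shown here.
MESSAGES = ["none", "attack", "retreat", "bomb_here"]


class CommAgent:
    """Toy communication-augmented agent (illustrative only)."""

    def __init__(self, agent_id, seed=0):
        self.agent_id = agent_id
        self.rng = random.Random(seed + agent_id)
        self.last_received = "none"

    def act(self, observation):
        """Pick an environment action and a message to broadcast."""
        # Toy policy: a teammate's "retreat" message overrides local behavior.
        if self.last_received == "retreat":
            action = "move_away"
        else:
            action = self.rng.choice(["move_toward", "place_bomb", "wait"])
        message = self.rng.choice(MESSAGES)
        return action, message

    def receive(self, message):
        self.last_received = message


def play_step(agents, observations):
    """One synchronous step: all agents act, then messages are exchanged."""
    decisions = [agent.act(obs) for agent, obs in zip(agents, observations)]
    for i, agent in enumerate(agents):
        # Deliver the teammate's message (a two-agent team is assumed).
        _, teammate_msg = decisions[1 - i]
        agent.receive(teammate_msg)
    return [action for action, _ in decisions]


team = [CommAgent(0), CommAgent(1)]
actions = play_step(team, [{"pos": (1, 1)}, {"pos": (5, 5)}])
```

This only sketches the interface (observation + incoming message in, action + outgoing message out); the paper's contribution is in which messages to send and when, learned from the annotated human corpus.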
Related papers
- Learning to Coordinate without Communication under Incomplete Information [39.106914895158035]
We show how an autonomous agent can learn to cooperate by interpreting its partner's actions.
Experimental results in a testbed called Gnomes at Night show that the learned no-communication coordination strategy achieves significantly higher success rates.
arXiv Detail & Related papers (2024-09-19T01:41:41Z)
- Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication [32.655335061150566]
We introduce a shared-control game, where two players collectively control a token in alternating turns to achieve a common objective under incomplete information.
We formulate a policy synthesis problem for an autonomous agent in this game with a human as the other player.
We propose a communication-based approach comprising a language module and a planning module.
arXiv Detail & Related papers (2024-05-23T04:58:42Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA)
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator)
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL)
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Preference Communication in Multi-Objective Normal-Form Games [3.8099752264464883]
We study the problem of multiple agents learning concurrently in a multi-objective environment.
We introduce four novel preference communication protocols for both cooperative and self-interested communication.
We find that preference communication can drastically alter the learning process and lead to the emergence of cyclic Nash equilibria.
arXiv Detail & Related papers (2021-11-17T15:30:41Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot *language coordination*.
We require the lead agent to coordinate with a *population* of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning to cooperate: Emergent communication in multi-agent navigation [49.11609702016523]
We show that agents performing a cooperative navigation task learn an interpretable communication protocol.
An analysis of the agents' policies reveals that emergent signals spatially cluster the state space.
Using populations of agents, we show that the emergent protocol has basic compositional structure.
arXiv Detail & Related papers (2020-04-02T16:03:17Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.