Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication
- URL: http://arxiv.org/abs/2405.14173v3
- Date: Sat, 1 Jun 2024 20:06:55 GMT
- Title: Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication
- Authors: Shenghui Chen, Daniel Fried, Ufuk Topcu
- Abstract summary: We introduce a shared-control game, where two players collectively control a token in alternating turns to achieve a common objective under incomplete information.
We formulate a policy synthesis problem for an autonomous agent in this game with a human as the other player.
We propose a communication-based approach comprising a language module and a planning module.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing autonomous agents that can strategize and cooperate with humans under information asymmetry is challenging without effective communication in natural language. We introduce a shared-control game, where two players collectively control a token in alternating turns to achieve a common objective under incomplete information. We formulate a policy synthesis problem for an autonomous agent in this game with a human as the other player. To solve this problem, we propose a communication-based approach comprising a language module and a planning module. The language module translates natural language messages into and from a finite set of flags, a compact representation defined to capture player intents. The planning module leverages these flags to compute a policy using an asymmetric information-set Monte Carlo tree search with flag exchange, an algorithm we present. We evaluate the effectiveness of this approach in a testbed based on Gnomes at Night, a search-and-find maze board game. Results of human subject experiments show that communication narrows the information gap between players and enhances human-agent cooperation efficiency, requiring fewer turns.
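The abstract names the planning algorithm but gives no implementation details, so the following is only a minimal sketch, assuming a determinization-based information-set MCTS whose state sampling is constrained by exchanged flags. All identifiers (Node, ismcts, sample_state, legal_actions, step, rollout) are invented for illustration and do not come from the paper.

```python
# Minimal sketch (not the paper's implementation) of an information-set
# Monte Carlo tree search in which exchanged "flags" -- compact intent
# tokens produced by a language module -- constrain which hidden states
# the agent samples. All names below are hypothetical.
import math
import random
from dataclasses import dataclass, field

Flag = str  # e.g. "HEADING_NORTH", "PATH_BLOCKED"

@dataclass
class Node:
    visits: int = 0
    value: float = 0.0
    children: dict = field(default_factory=dict)  # action -> Node

def ucb(parent: Node, child: Node, c: float = 1.4) -> float:
    # Upper-confidence bound used for action selection in the tree.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def ismcts(info_set, flags, sample_state, legal_actions, step, rollout,
           n_iter=1000):
    """Pick an action for one turn. `sample_state` must return a full
    game state consistent with both the agent's information set and the
    flags received from the human so far (this is where communication
    narrows the information gap)."""
    root = Node()
    for _ in range(n_iter):
        state = sample_state(info_set, flags)  # determinization
        node, path, done = root, [], False
        while not done:
            actions = legal_actions(state)
            if not actions:
                break
            untried = [a for a in actions if a not in node.children]
            if untried:  # expansion: add one new child node
                action = random.choice(untried)
                node.children[action] = Node()
            else:  # selection: descend along the best UCB action
                action = max(actions,
                             key=lambda a: ucb(node, node.children[a]))
            path.append((node, action))
            node = node.children[action]
            state, done = step(state, action)
            if untried:
                break
        reward = rollout(state)  # random playout to a terminal value
        root.visits += 1
        for parent, action in path:  # backpropagation
            parent.children[action].visits += 1
            parent.children[action].value += reward
    return max(root.children, key=lambda a: root.children[a].visits)
```

In the paper's game the search is asymmetric because each player sees only its own side of the maze; in this sketch that asymmetry would live entirely inside `sample_state`, which should propose only states consistent with what the agent can observe plus the flags the human has communicated.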
Related papers
- Learning to Coordinate without Communication under Incomplete Information
We show how an autonomous agent can learn to cooperate by interpreting its partner's actions.
Experimental results in a testbed called Gnomes at Night show that the learned no-communication coordination strategy achieves significantly higher success rates.
arXiv Detail & Related papers (2024-09-19T01:41:41Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between the parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments: Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that the resulting agent, CoELA, driven by GPT-4, can surpass strong planning-based methods and exhibit effective emergent communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Learning to Infer Belief Embedded Communication
This paper introduces a novel algorithm to mimic an agent's language learning ability.
It contains a perception module for decoding other agents' intentions in response to their past actions.
It also includes a language generation module for learning implicit grammar during communication with two or more agents.
arXiv Detail & Related papers (2022-03-15T12:42:10Z)
- Toward Collaborative Reinforcement Learning Agents that Communicate Through Text-Based Natural Language
This paper considers text-based natural language as a novel form of communication between agents trained with reinforcement learning.
Inspired by the game of Blind Leads, we propose an environment where one agent uses natural language instructions to guide another through a maze.
arXiv Detail & Related papers (2021-07-20T09:19:29Z)
- Few-shot Language Coordination by Modeling Theory of Mind
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree
We present a new machine learning model for imitation learning based on linguistic descriptions of complex phenomena.
The method can be a good alternative for designing and implementing the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman
In multi-agent learning, agents must coordinate with each other in order to succeed. For humans, this coordination is typically accomplished through the use of language.
We construct Pow-Wow, a new dataset for studying situated goal-directed human communication.
We analyze the types of communications which result in effective game strategies, annotate them accordingly, and present corpus-level statistical analysis of how trends in communications affect game outcomes.
arXiv Detail & Related papers (2020-09-13T07:11:37Z)
- Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning
We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning.
Our starting point is a language model that has been trained on generic, not task-specific language data.
We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model.
arXiv Detail & Related papers (2020-05-14T15:32:23Z)
- On the interaction between supervision and self-play in emergent communication
We investigate the relationship between two categories of learning signals, supervision on human data and self-play reward, with the ultimate goal of improving sample efficiency.
We find that first training agents via supervised learning on human data, followed by self-play, outperforms the converse order (see the sketch after this list).
arXiv Detail & Related papers (2020-02-04T02:35:19Z)
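The last entry's finding concerns training order, so here is a toy sketch of that schedule on a trivial one-token signalling game: a speaker is first trained supervised on human (meaning, message) pairs, then fine-tuned with REINFORCE in self-play. Everything here (the Speaker class, the identity protocol, all sizes and step counts) is invented for illustration and does not come from the paper.

```python
# Toy sketch of "supervised learning on human data, then self-play",
# the schedule the entry above reports works better than the converse.
import torch
from torch import nn, optim

class Speaker(nn.Module):
    # Maps a discrete meaning id to logits over message tokens.
    def __init__(self, n_meanings=10, n_tokens=10):
        super().__init__()
        self.net = nn.Sequential(nn.Embedding(n_meanings, 32),
                                 nn.ReLU(),
                                 nn.Linear(32, n_tokens))

    def forward(self, meaning):
        return self.net(meaning)

speaker = Speaker()
opt = optim.Adam(speaker.parameters(), lr=1e-3)

# Phase 1: supervised learning on human (meaning, message) pairs.
# In this toy, the "human protocol" is simply token == meaning.
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    meaning = torch.randint(0, 10, (32,))
    human_msg = meaning.clone()
    opt.zero_grad()
    loss_fn(speaker(meaning), human_msg).backward()
    opt.step()

# Phase 2: self-play fine-tuning with REINFORCE. The (implicit)
# listener succeeds when the sampled token matches the meaning.
for _ in range(200):
    meaning = torch.randint(0, 10, (32,))
    dist = torch.distributions.Categorical(logits=speaker(meaning))
    token = dist.sample()
    reward = (token == meaning).float()
    opt.zero_grad()
    (-dist.log_prob(token) * reward).mean().backward()
    opt.step()
```

Swapping the two phases would give the converse schedule, which the paper reports performs worse in terms of sample efficiency.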