Learning to Coordinate without Communication under Incomplete Information
- URL: http://arxiv.org/abs/2409.12397v1
- Date: Thu, 19 Sep 2024 01:41:41 GMT
- Title: Learning to Coordinate without Communication under Incomplete Information
- Authors: Shenghui Chen, Shufang Zhu, Giuseppe De Giacomo, Ufuk Topcu
- Abstract summary: We show how an autonomous agent can learn to cooperate by interpreting its partner's actions.
Experimental results in a testbed called Gnomes at Night show that the learned no-communication coordination strategy achieves significantly higher success rates and completes the game in fewer steps than uncoordinated play.
- Score: 39.106914895158035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving seamless coordination in cooperative games is a crucial challenge in artificial intelligence, particularly when players operate under incomplete information. A common strategy to mitigate this information asymmetry involves leveraging explicit communication. However, direct communication is not always feasible due to factors such as transmission loss. We explore how effective coordination can be achieved without verbal communication, relying solely on observing each other's actions. We demonstrate how an autonomous agent can learn to cooperate by interpreting its partner's actions, which hint at its intent. Our approach involves developing an agent strategy by constructing deterministic finite automata for each possible action and integrating them into a non-Markovian finite-state transducer. This transducer represents a non-deterministic strategy for the agent that suggests actions to assist its partner during gameplay. Experimental results in a testbed called Gnomes at Night show that the learned no-communication coordination strategy achieves significantly higher success rates and requires fewer steps to complete the game compared to uncoordinated scenarios, performing almost as well as an oracle baseline with direct communication.
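To make the construction above concrete, here is a minimal sketch, not the paper's actual implementation: the names (DFA, HintTransducer) and the toy action alphabet are assumptions. It illustrates how one automaton per candidate action can be advanced over the partner's action history and how the automata combine into a history-dependent (non-Markovian) transducer that suggests a set of actions.

```python
from dataclasses import dataclass


@dataclass
class DFA:
    """Deterministic finite automaton over the partner's observed actions."""
    transitions: dict  # (state, observed_action) -> next_state
    accepting: set     # states in which the associated action is suggested
    state: str = "q0"  # current state, advanced as the history grows

    def step(self, observed_action: str) -> None:
        # Unmodelled histories fall into an absorbing sink state.
        self.state = self.transitions.get((self.state, observed_action), "sink")

    def accepts(self) -> bool:
        return self.state in self.accepting


class HintTransducer:
    """History-dependent (non-Markovian) strategy: it keeps one DFA per
    candidate action and suggests every action whose DFA currently accepts."""

    def __init__(self, dfa_per_action: dict):
        self.dfa_per_action = dfa_per_action

    def observe(self, partner_action: str) -> None:
        for dfa in self.dfa_per_action.values():
            dfa.step(partner_action)

    def suggest(self) -> set:
        # Non-deterministic output: possibly several admissible actions at once.
        return {a for a, dfa in self.dfa_per_action.items() if dfa.accepts()}


# Toy usage: after observing the partner move "up" twice, suggest "right".
right_dfa = DFA(transitions={("q0", "up"): "q1", ("q1", "up"): "q2"},
                accepting={"q2"})
strategy = HintTransducer({"right": right_dfa})
for move in ["up", "up"]:
    strategy.observe(move)
print(strategy.suggest())  # {'right'}
```

The set-valued output of suggest() is what makes the strategy non-deterministic: whenever several automata accept the same history, any of the corresponding actions is an admissible suggestion.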
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
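(A minimal illustrative sketch of such gated, graph-structured message passing appears at the end of this list.)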
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z) - Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game [22.28337771947361]
We evaluate the adaptability of neural artificial agents towards assumed partner behaviors in a collaborative reference game.
Our results indicate that this novel ingredient leads to communicative strategies that are less verbose.
arXiv Detail & Related papers (2024-02-07T13:22:17Z) - From Explicit Communication to Tacit Cooperation: A Novel Paradigm for Cooperative MARL [14.935456456463731]
We propose a novel paradigm that facilitates a gradual shift from explicit communication to tacit cooperation.
In the initial training stage, we promote cooperation by sharing relevant information among agents.
We then combine the explicitly communicated information with the reconstructed information to obtain mixed information.
arXiv Detail & Related papers (2023-04-28T06:56:07Z) - Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication [5.432350993419402]
Communication enables agents to cooperate to achieve their goals.
Recent work on learning sparse communication suffers from high-variance training, where the price of decreasing communication is a decrease in reward, particularly in cooperative tasks.
This research addresses the above issues by limiting the loss in reward of decreasing communication and eliminating the penalty for discretization.
arXiv Detail & Related papers (2022-01-19T07:31:06Z) - Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z) - Curriculum-Driven Multi-Agent Learning and the Role of Implicit Communication in Teamwork [24.92668968807012]
We propose a curriculum-driven learning strategy for solving difficult multi-agent coordination tasks.
We argue that emergent implicit communication plays a large role in enabling superior levels of coordination.
arXiv Detail & Related papers (2021-06-21T14:54:07Z) - Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman [12.498028338281625]
In multi-agent learning, agents must coordinate with each other in order to succeed. For humans, this coordination is typically accomplished through the use of language.
We construct Pow-Wow, a new dataset for studying situated goal-directed human communication.
We analyze the types of communications which result in effective game strategies, annotate them accordingly, and present corpus-level statistical analysis of how trends in communications affect game outcomes.
arXiv Detail & Related papers (2020-09-13T07:11:37Z)
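Several entries above describe learning a communication graph with per-agent gating. The following is a minimal sketch, under assumed names, shapes, and a random initialization, of what gated, graph-structured message aggregation can look like; it is an illustration only, not the architecture of any of the papers listed.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_agents, dim = 3, 4

adjacency_logits = rng.normal(size=(n_agents, n_agents))  # learnable graph parameters
gate_weights = rng.normal(size=(n_agents, dim))           # learnable per-agent gate parameters


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def communicate(hidden: np.ndarray) -> np.ndarray:
    """hidden: (n_agents, dim) local states; returns gated incoming messages."""
    edge_weights = sigmoid(adjacency_logits)   # soft, differentiable edge weights
    np.fill_diagonal(edge_weights, 0.0)        # no self-messages
    incoming = edge_weights @ hidden           # aggregate neighbours' states
    # Temporal gate: each agent decides, from its own state, whether to listen now.
    gate = sigmoid(np.sum(gate_weights * hidden, axis=1, keepdims=True))
    return gate * incoming


received = communicate(rng.normal(size=(n_agents, dim)))
print(received.shape)  # (3, 4)
```

In this toy version both the edge weights and the gates are differentiable, so they could in principle be trained end-to-end alongside the agents' policies.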