Learning to Coordinate without Communication under Incomplete Information
- URL: http://arxiv.org/abs/2409.12397v2
- Date: Wed, 05 Feb 2025 19:00:07 GMT
- Title: Learning to Coordinate without Communication under Incomplete Information
- Authors: Shenghui Chen, Shufang Zhu, Giuseppe De Giacomo, Ufuk Topcu
- Abstract summary: Experimental results in a Gnomes at Night testbed show that, even without direct communication, one can learn effective cooperation strategies.
Such strategies achieve significantly higher success rates and require fewer steps to complete the game compared to uncoordinated ones.
- Abstract: Achieving seamless coordination in cooperative games is a crucial challenge in artificial intelligence, particularly when players operate under incomplete information. A common strategy to mitigate this information asymmetry involves leveraging explicit communication. However, direct (verbal) communication is not always feasible due to factors such as transmission loss. Leveraging the game Gnomes at Night, we explore how effective coordination can be achieved without verbal communication, relying solely on observing each other's actions. We demonstrate how an autonomous agent can learn to cooperate by interpreting its partner's sequences of actions, which hint at its intents. Our approach generates a non-Markovian strategy for the agent by learning a deterministic finite automaton for each possible action and integrating these automata into a finite-state transducer. Experimental results in a Gnomes at Night testbed show that, even without direct communication, one can learn effective cooperation strategies. Such strategies achieve significantly higher success rates and require fewer steps to complete the game compared to uncoordinated ones, and perform almost as well as when direct communication is allowed.
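The abstract's core construction can be sketched in code: one deterministic finite automaton (DFA) per candidate agent action, classifying observed partner-action histories, and a transducer-style policy that maps a history to an action. This is a minimal illustrative sketch, not the paper's implementation; the class names, the `transduce` helper, and the toy "two ups" automaton are all assumptions made up for illustration.

```python
class DFA:
    """Deterministic finite automaton over partner-action symbols.
    Transitions map (state, symbol) -> state; missing entries reject."""

    def __init__(self, transitions, start, accepting):
        self.transitions = transitions
        self.start = start
        self.accepting = accepting  # set of accepting states

    def accepts(self, word):
        state = self.start
        for sym in word:
            state = self.transitions.get((state, sym))
            if state is None:
                return False
        return state in self.accepting


def transduce(dfas_by_action, history, default="wait"):
    """Transducer-style non-Markovian policy: return the first agent
    action whose DFA accepts the observed partner-action history."""
    for action, dfa in dfas_by_action.items():
        if dfa.accepts(history):
            return action
    return default


# Toy hypothetical rule: the partner moving "up" twice hints that the
# agent should also move "up"; any other history yields the default.
up_dfa = DFA({(0, "up"): 1, (1, "up"): 2}, start=0, accepting={2})
policy = {"up": up_dfa}

print(transduce(policy, ["up", "up"]))  # -> up
print(transduce(policy, ["left"]))      # -> wait
```

Because the policy conditions on the whole history rather than the current state alone, it is non-Markovian in the sense the abstract describes; the learned automata compactly summarize which histories matter.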
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between the parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
- Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game [22.28337771947361]
We evaluate the adaptability of neural artificial agents towards assumed partner behaviors in a collaborative reference game.
Our results indicate that this novel ingredient leads to communicative strategies that are less verbose.
arXiv Detail & Related papers (2024-02-07T13:22:17Z)
- Inferring the Goals of Communicating Agents from Actions and Instructions [47.5816320484482]
We introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant.
We show how a third person observer can infer the team's goal via multi-modal inverse planning from actions and instructions.
We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments.
arXiv Detail & Related papers (2023-06-28T13:43:46Z)
- From Explicit Communication to Tacit Cooperation: A Novel Paradigm for Cooperative MARL [14.935456456463731]
We propose a novel paradigm that facilitates a gradual shift from explicit communication to tacit cooperation.
In the initial training stage, we promote cooperation by sharing relevant information among agents.
We then combine the explicitly communicated information with the reconstructed information to obtain mixed information.
arXiv Detail & Related papers (2023-04-28T06:56:07Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication [5.432350993419402]
Communication enables agents to cooperate to achieve their goals.
Recent work in learning sparse communication suffers from high-variance training, where the price of decreasing communication is a decrease in reward, particularly in cooperative tasks.
This research addresses the above issues by limiting the loss in reward of decreasing communication and eliminating the penalty for discretization.
arXiv Detail & Related papers (2022-01-19T07:31:06Z)
- Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman [12.498028338281625]
In multi-agent learning, agents must coordinate with each other in order to succeed. For humans, this coordination is typically accomplished through the use of language.
We construct Pow-Wow, a new dataset for studying situated goal-directed human communication.
We analyze the types of communications which result in effective game strategies, annotate them accordingly, and present corpus-level statistical analysis of how trends in communications affect game outcomes.
arXiv Detail & Related papers (2020-09-13T07:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.