Control as Probabilistic Inference as an Emergent Communication
Mechanism in Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2307.05004v1
- Date: Tue, 11 Jul 2023 03:53:46 GMT
- Title: Control as Probabilistic Inference as an Emergent Communication
Mechanism in Multi-Agent Reinforcement Learning
- Authors: Tomoaki Nakamura, Akira Taniguchi, Tadahiro Taniguchi
- Abstract summary: This paper proposes a generative probabilistic model (PGM) integrating emergent communication and multi-agent reinforcement learning.
We show that the proposed PGM can infer meaningful messages to achieve the cooperative task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a generative probabilistic model (PGM) integrating
emergent communication and multi-agent reinforcement learning. The agents plan
their actions by probabilistic inference, called control as inference, and
communicate using messages, which are latent variables estimated from the
planned actions. Through these messages, each agent can share information about
its own actions and obtain information about the actions of the other agent.
The agents therefore adjust their actions according to the estimated messages
to achieve cooperative tasks. This inference of messages can be regarded as
communication, and the procedure can be formulated as the Metropolis-Hastings
naming game. Through experiments in a grid-world environment, we show that
the proposed PGM can infer meaningful messages to achieve the cooperative task.
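As a rough illustration of the mechanism described in the abstract, the sketch below pairs a control-as-inference action posterior (softmax over rewards) with one Metropolis-Hastings naming-game exchange: the speaker proposes a message from its own distribution and the listener accepts it with an MH-style acceptance probability. The message names, probability tables, and helper functions are hypothetical stand-ins, not the paper's actual model.

```python
import math
import random

def action_posterior(rewards, temperature=1.0):
    """Control as inference: a softmax posterior over actions, where
    p(action | optimality) is proportional to exp(reward / temperature)."""
    weights = [math.exp(r / temperature) for r in rewards.values()]
    total = sum(weights)
    return {a: w / total for a, w in zip(rewards, weights)}

def mh_message_step(speaker_msg_probs, listener_msg_probs, current_msg, rng=random):
    """One Metropolis-Hastings naming-game step: the speaker proposes a
    message from its own distribution; the listener accepts the proposal
    with probability min(1, ratio of its own message probabilities)."""
    messages = list(speaker_msg_probs)
    proposal = rng.choices(messages,
                           weights=[speaker_msg_probs[m] for m in messages])[0]
    ratio = listener_msg_probs[proposal] / max(listener_msg_probs[current_msg], 1e-12)
    return proposal if rng.random() < min(1.0, ratio) else current_msg

# Hypothetical two-agent setup: in the paper the message distributions come
# from each agent's planned actions; here they are fixed tables for brevity.
speaker_probs = {"goal-left": 0.8, "goal-right": 0.2}
listener_probs = {"goal-left": 0.5, "goal-right": 0.5}

msg = "goal-right"
for _ in range(50):  # repeated exchanges let the agents settle on a shared message
    msg = mh_message_step(speaker_probs, listener_probs, msg)

post = action_posterior({"left": 1.0, "right": 0.0})
```

In this toy setup, repeated exchanges bias the shared message toward the one the speaker favors, while the softmax posterior concentrates on higher-reward actions as the temperature decreases.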
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Verco: Learning Coordinated Verbal Communication for Multi-agent Reinforcement Learning [42.27106057372819]
We propose a novel multi-agent reinforcement learning algorithm that embeds large language models into agents.
The framework has a message module and an action module.
Experiments conducted on the Overcooked game demonstrate our method significantly enhances the learning efficiency and performance of existing methods.
arXiv Detail & Related papers (2024-04-27T05:10:33Z)
- T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration [15.91335141803629]
We propose Targeted and Trusted Multi-Agent Communication (T2MAC) to help agents learn selective engagement and evidence-driven integration.
T2MAC enables agents to craft individualized messages, pinpoint ideal communication windows, and engage with reliable partners.
We evaluate our method on a diverse set of cooperative multi-agent tasks, with varying difficulties, involving different scales.
arXiv Detail & Related papers (2024-01-19T18:00:33Z)
- Inferring the Goals of Communicating Agents from Actions and Instructions [47.5816320484482]
We introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant.
We show how a third person observer can infer the team's goal via multi-modal inverse planning from actions and instructions.
We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments.
arXiv Detail & Related papers (2023-06-28T13:43:46Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications [88.68461721069433]
Next-generation systems can potentially be enriched by folding message semantics and communication goals into their design.
This tutorial summarizes the efforts to date, from early adaptations to semantic-aware and task-oriented communications.
The focus is on approaches that utilize information theory to provide the foundations, as well as the significant role of learning in semantics and task-aware communications.
arXiv Detail & Related papers (2022-07-19T16:00:57Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Inference-Based Deterministic Messaging For Multi-Agent Communication [1.8275108630751844]
We study learning in matrix-based signaling games to show that decentralized methods can converge to a suboptimal policy.
We then propose a modification to the messaging policy, in which the sender deterministically chooses the best message that helps the receiver to infer the sender's observation.
arXiv Detail & Related papers (2021-03-03T03:09:22Z)
- Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning [36.468498804251574]
We show that discrete message communication has performance comparable to continuous message communication.
We propose an approach that allows humans to interactively send discrete messages to agents.
arXiv Detail & Related papers (2021-02-24T20:44:14Z)
- SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning [61.335252950832256]
Sense-Plan-Ask, or SPA, generates plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments.
We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents unable to leverage natural-language communication.
arXiv Detail & Related papers (2020-02-08T23:15:06Z)