Specializing Inter-Agent Communication in Heterogeneous Multi-Agent
Reinforcement Learning using Agent Class Information
- URL: http://arxiv.org/abs/2012.07617v2
- Date: Wed, 10 Mar 2021 15:19:56 GMT
- Title: Specializing Inter-Agent Communication in Heterogeneous Multi-Agent
Reinforcement Learning using Agent Class Information
- Authors: Douglas De Rizzo Meneghetti, Reinaldo Augusto da Costa Bianchi
- Abstract summary: This work proposes the representation of multi-agent communication capabilities as a directed labeled heterogeneous agent graph.
We also introduce a neural network architecture that specializes communication in fully cooperative heterogeneous multi-agent tasks.
- Score: 1.713291434132985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by recent advances in agent communication with graph neural
networks, this work proposes representing multi-agent communication
capabilities as a directed labeled heterogeneous agent graph, in which node
labels denote agent classes and edge labels denote the communication type between two
classes of agents. We also introduce a neural network architecture that
specializes communication in fully cooperative heterogeneous multi-agent tasks
by learning an individual transformation for the messages exchanged between each
pair of agent classes. By also employing encoding and action selection modules
with parameter sharing for environments with heterogeneous agents, we
demonstrate comparable or superior performance in environments where a larger
number of agent classes operates.
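To make the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the agent classes ("medic", "scout"), the edge list, the layer sizes, and the GRU-based update are assumptions made for illustration. It builds a small directed labeled agent graph, applies a separate learned transformation to messages for each (sender class, receiver class) pair, and shares encoder parameters among agents of the same class.

# Hypothetical sketch of class-specialized communication (all names and sizes assumed).
import torch
import torch.nn as nn

HIDDEN = 32

# Directed labeled heterogeneous agent graph (toy example).
# Node labels are agent classes; an edge is labeled by its (sender class, receiver class) pair.
agent_classes = {0: "medic", 1: "medic", 2: "scout", 3: "scout"}  # node -> class label
edges = [(0, 2), (1, 2), (2, 3), (3, 0)]                          # directed (sender, receiver)


class ClassPairComm(nn.Module):
    """One communication round with a separate learned linear transformation
    for every (sender class, receiver class) pair, as the abstract describes."""

    def __init__(self, classes, hidden=HIDDEN):
        super().__init__()
        self.msg = nn.ModuleDict({
            f"{s}->{r}": nn.Linear(hidden, hidden)
            for s in classes for r in classes
        })
        self.update = nn.GRUCell(hidden, hidden)  # assumed recurrent update; any aggregator would do

    def forward(self, h, edges, agent_classes):
        # h: (num_agents, hidden) encoded observations
        agg = [torch.zeros(h.size(1)) for _ in range(h.size(0))]
        for s, r in edges:
            key = f"{agent_classes[s]}->{agent_classes[r]}"
            agg[r] = agg[r] + self.msg[key](h[s])   # message specialized to this class pair
        return self.update(torch.stack(agg), h)     # per-agent hidden-state update


# Class-wise parameter sharing: one encoder per agent class, reused by every agent
# of that class (module choices here are illustrative assumptions).
encoders = nn.ModuleDict({c: nn.Linear(8, HIDDEN) for c in ("medic", "scout")})
comm = ClassPairComm(classes=("medic", "scout"))

obs = torch.randn(4, 8)  # toy per-agent observations
h = torch.stack([encoders[agent_classes[i]](obs[i]) for i in range(4)])
h = comm(h, edges, agent_classes)  # one round of class-specialized communication

In a full MARL pipeline, per-class action selection heads (e.g., Q-value or policy modules shared by agents of the same class) would consume the updated hidden states; that part is omitted here for brevity.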
Related papers
- Relative Representations of Latent Spaces enable Efficient Semantic Channel Equalization [11.052047963214006]
We present a novel semantic equalization algorithm that enables communication between agents with different languages without additional retraining.
Our numerical results show the effectiveness of the proposed approach, allowing seamless communication between agents with radically different models.
arXiv Detail & Related papers (2024-11-29T14:08:48Z)
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Scalable Communication for Multi-Agent Reinforcement Learning via Transformer-Based Email Mechanism [9.607941773452925]
Communication can impressively improve cooperation in multi-agent reinforcement learning (MARL).
We propose a novel framework Transformer-based Email Mechanism (TEM) to tackle the scalability problem of MARL communication for partially-observed tasks.
arXiv Detail & Related papers (2023-01-05T05:34:30Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Multiagent Multimodal Categorization for Symbol Emergence: Emergent Communication via Interpersonal Cross-modal Inference [4.964816143841663]
This paper describes a computational model of multiagent multimodal categorization that realizes emergent communication.
Inter-MDM enables agents to form multimodal categories and appropriately share signs between agents.
It is shown that emergent communication improves categorization accuracy, even when some sensory modalities are missing.
arXiv Detail & Related papers (2021-09-15T10:20:54Z)
- Towards Heterogeneous Multi-Agent Reinforcement Learning with Graph Neural Networks [1.370633147306388]
This work proposes a neural network architecture that learns policies for multiple agent classes in a heterogeneous multi-agent reinforcement learning setting.
Results have shown that specializing the communication channels between entity classes is a promising step to achieve higher performance in environments composed of heterogeneous entities.
arXiv Detail & Related papers (2020-09-28T09:15:04Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)