Learning to Interact in World Latent for Team Coordination
- URL: http://arxiv.org/abs/2509.25550v3
- Date: Thu, 02 Oct 2025 20:45:00 GMT
- Title: Learning to Interact in World Latent for Team Coordination
- Authors: Dongsu Lee, Daehee Lee, Yaru Niu, Honguk Woo, Amy Zhang, Ding Zhao
- Abstract summary: This work presents a novel representation learning framework, interactive world latent (IWoL), to facilitate team coordination in multi-agent reinforcement learning (MARL). Our key insight is to construct a learnable representation space that jointly captures inter-agent relations and task-specific world information by directly modeling communication protocols. Our representation can be used not only as an implicit latent for each agent, but also as an explicit message for communication.
- Score: 53.51290193631586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a novel representation learning framework, interactive world latent (IWoL), to facilitate team coordination in multi-agent reinforcement learning (MARL). Building an effective representation for team coordination is a challenging problem, due to the intricate dynamics emerging from multi-agent interaction and the incomplete information induced by local observations. Our key insight is to construct a learnable representation space that jointly captures inter-agent relations and task-specific world information by directly modeling communication protocols. With this representation, we maintain fully decentralized execution with implicit coordination, all while avoiding the inherent drawbacks of explicit message passing, e.g., slower decision-making, vulnerability to malicious attackers, and sensitivity to bandwidth constraints. In practice, our representation can be used not only as an implicit latent for each agent, but also as an explicit message for communication. Across four challenging MARL benchmarks, we evaluate both variants and show that IWoL provides a simple yet powerful key for team coordination. Moreover, we demonstrate that our representation can be combined with existing MARL algorithms to further enhance their performance.
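The abstract's dual use of the latent, as a private conditioning signal during decentralized execution or as an explicit message when communication is available, can be sketched in minimal pseudocode. This is a hypothetical illustration of the interface only, not the paper's implementation: the encoder, dimensions, and agent names are invented stand-ins, and the real IWoL encoder is a trained neural network rather than a fixed linear map.

```python
import math
import random

random.seed(0)

def encode(obs, weights):
    """Tiny stand-in for a learned encoder: a linear map plus tanh.
    In IWoL this would be trained so the latent jointly captures
    inter-agent relations and task-specific world information."""
    return [math.tanh(sum(w * o for w, o in zip(row, obs))) for row in weights]

# Hypothetical setup: two agents, 3-dim local observations, 2-dim latent.
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

obs_a = [0.5, -0.2, 0.1]
obs_b = [-0.3, 0.8, 0.4]

latent_a = encode(obs_a, weights)
latent_b = encode(obs_b, weights)

# Implicit mode: each agent conditions only on its own latent
# (fully decentralized execution, no messages exchanged).
policy_input_a = obs_a + latent_a

# Explicit mode: the same latent doubles as a compact message,
# so agent A additionally receives agent B's latent.
policy_input_a_comm = obs_a + latent_a + latent_b
```

The point of the sketch is that nothing about the latent changes between the two modes; only whether it is transmitted does, which is how a single representation can serve both the communication-free and the communicating variants evaluated in the paper.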
Related papers
- Multi-Agent Deep Reinforcement Learning Under Constrained Communications [2.7126292487109005]
We present a distributed multi-agent reinforcement learning (MARL) framework that removes the need for centralized critics or global information. We develop a novel Graph Attention Network (D-GAT) that performs global state inference through multi-hop communication. We also develop the distributed graph-attention MAPPO (DG-MAPPO) -- a distributed MARL framework where agents optimize local policies and value functions.
arXiv Detail & Related papers (2026-01-22T21:07:18Z)
- Scalable Multiagent Reinforcement Learning with Collective Influence Estimation [5.050035210247092]
This paper proposes a multiagent learning framework augmented with a Collective Influence Estimation Network. By explicitly modeling the collective influence of other agents on the task object, each agent can infer critical interaction information. Experimental results show that the proposed method achieves stable and efficient coordination under communication-limited environments.
arXiv Detail & Related papers (2026-01-13T04:24:11Z)
- In-Context Reinforcement Learning via Communicative World Models [49.00028802135605]
This work formulates ICRL as a two-agent emergent communication problem. It introduces CORAL, a framework that learns a transferable communicative context. Our experiments demonstrate that this approach enables the CA to achieve significant gains in sample efficiency.
arXiv Detail & Related papers (2025-08-08T19:23:23Z)
- Communicating Plans, Not Percepts: Scalable Multi-Agent Coordination with Embodied World Models [0.0]
A central question in Multi-Agent Reinforcement Learning (MARL) is whether to engineer communication protocols or learn them end-to-end. We propose and compare two communication strategies for a cooperative task-allocation problem. Our experiments reveal that while emergent communication is viable in simple settings, the engineered, world model-based approach shows superior performance, sample efficiency, and scalability as complexity increases.
arXiv Detail & Related papers (2025-08-04T21:29:07Z)
- Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination [0.9776703963093367]
Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) has emerged as a pivotal approach for addressing complex tasks in dynamic environments. This paper presents a novel Dec-MARL framework that integrates peer-to-peer communication and coordination, incorporating goal-awareness and time-awareness into the agents' knowledge-sharing processes.
arXiv Detail & Related papers (2025-01-26T22:49:50Z)
- Tacit Learning with Adaptive Information Selection for Cooperative Multi-Agent Reinforcement Learning [13.918498667158119]
We introduce a novel cooperative MARL framework based on information selection and tacit learning. We integrate gating and selection mechanisms, allowing agents to adaptively filter information based on environmental changes. Experiments on popular MARL benchmarks show that our framework can be seamlessly integrated with state-of-the-art algorithms.
arXiv Detail & Related papers (2024-12-20T07:55:59Z)
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Cooperative Policy Learning with Pre-trained Heterogeneous Observation Representations [51.8796674904734]
We propose a new cooperative learning framework with pre-trained heterogeneous observation representations.
We employ an encoder-decoder based graph attention to learn the intricate interactions and heterogeneous representations.
arXiv Detail & Related papers (2020-12-24T04:52:29Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.