Interpretation of Emergent Communication in Heterogeneous Collaborative
Embodied Agents
- URL: http://arxiv.org/abs/2110.05769v1
- Date: Tue, 12 Oct 2021 06:56:11 GMT
- Title: Interpretation of Emergent Communication in Heterogeneous Collaborative
Embodied Agents
- Authors: Shivansh Patel, Saim Wani, Unnat Jain, Alexander Schwing, Svetlana
Lazebnik, Manolis Savva, Angel X. Chang
- Abstract summary: We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
- Score: 83.52684405389445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication between embodied AI agents has received increasing attention in
recent years. Despite its growing use, it remains unclear whether the learned
communication is interpretable and grounded in perception. To study the
grounding of emergent forms of communication, we first introduce the
collaborative multi-object navigation task CoMON. In this task, an oracle agent
has detailed environment information in the form of a map. It communicates with
a navigator agent that perceives the environment visually and is tasked to find
a sequence of goals. To succeed at the task, effective communication is
essential. CoMON hence serves as a basis to study different communication
mechanisms between heterogeneous agents, that is, agents with different
capabilities and roles. We study two common communication mechanisms and
analyze their communication patterns through an egocentric and spatial lens. We
show that the emergent communication can be grounded to the agent observations
and the spatial structure of the 3D environment. Video summary:
https://youtu.be/kLv2rxO9t0g
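To make the task setup concrete, here is a minimal sketch of the oracle-navigator loop the abstract describes: one agent reads the map and emits a message, the other reads only egocentric pixels plus that message. Every class, method, and shape below is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of one CoMON-style step. All names/shapes are assumptions.
import numpy as np

class Oracle:
    """Map-reading agent; emits one fixed-size message vector per step."""
    def __init__(self, msg_dim: int = 16):
        self.msg_dim = msg_dim

    def communicate(self, map_obs: np.ndarray, goal_xy: np.ndarray) -> np.ndarray:
        # Stand-in for a learned policy head; in the paper, communication is
        # trained end-to-end with the navigator on task reward.
        return np.random.randn(self.msg_dim)

class Navigator:
    """Visually grounded agent; conditions its action on the message."""
    ACTIONS = ("FORWARD", "TURN_LEFT", "TURN_RIGHT", "FOUND")

    def act(self, rgb_obs: np.ndarray, message: np.ndarray) -> str:
        # Stand-in for a learned policy conditioned on (observation, message).
        return self.ACTIONS[int(abs(message).argmax()) % len(self.ACTIONS)]

oracle, navigator = Oracle(), Navigator()
map_obs = np.zeros((64, 64))        # hypothetical top-down map
rgb_obs = np.zeros((128, 128, 3))   # hypothetical egocentric frame
goal_xy = np.array([12.0, 3.0])     # hypothetical goal location
msg = oracle.communicate(map_obs, goal_xy)
action = navigator.act(rgb_obs, msg)
```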
Related papers
- Bidirectional Emergent Language in Situated Environments [4.950411915351642]
We introduce two novel cooperative environments: Multi-Agent Pong and Collectors.
Optimal performance requires the emergence of a communication protocol, but moderate success can be achieved without one.
We find that the emerging communication is sparse, with the agents only generating meaningful messages and acting upon incoming messages in states where they cannot succeed without coordination.
arXiv Detail & Related papers (2024-08-26T21:25:44Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between the parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments: Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
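Read literally, the GOMA objective above ("minimizes the misalignment between goal-relevant parts of mental states") suggests an utterance-selection rule of roughly the following form. The notation is our assumption, not the paper's:

```latex
% Plausible reading of the stated GOMA objective (notation assumed): pick the
% utterance u that best aligns the listener's posterior belief about the
% goal-relevant state s_G with the speaker's belief, net of a speaking cost.
u^{*} = \arg\min_{u \in \mathcal{U}}
    D\big( b_{L}(s_{G} \mid u) \,\|\, b_{S}(s_{G}) \big) + \lambda \, c(u)
```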
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
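The "forgo an environmental action to communicate" pressure in the entry above can be pictured as a single action space in which message tokens occupy the same slot as movement, so each message costs a step of progress. The action names and token set in this toy sketch are assumptions, not the paper's code:

```python
# Sketch of "situated communication": messages and moves are mutually exclusive.
from enum import Enum

class Action(Enum):
    MOVE_UP = 0
    MOVE_DOWN = 1
    MOVE_LEFT = 2
    MOVE_RIGHT = 3
    SEND_TOKEN_A = 4   # choosing a message means not moving this step
    SEND_TOKEN_B = 5

def step(agent_pos, action: Action):
    """One environment step; returns (new position, message or None)."""
    x, y = agent_pos
    if action is Action.MOVE_UP:    return (x, y + 1), None
    if action is Action.MOVE_DOWN:  return (x, y - 1), None
    if action is Action.MOVE_LEFT:  return (x - 1, y), None
    if action is Action.MOVE_RIGHT: return (x + 1, y), None
    # Communication: position is unchanged, the partner receives the token.
    return (x, y), action.name

pos, msg = step((0, 0), Action.SEND_TOKEN_A)   # pos stays (0, 0)
```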
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Learning to Draw: Emergent Communication through Sketching [0.0]
We show how agents can learn to communicate in order to collaboratively solve tasks.
Existing research has focused on language, with a learned communication channel transmitting sequences of discrete tokens between the agents.
Our agents are parameterised by deep neural networks, and the drawing procedure is differentiable, allowing for end-to-end training.
In the framework of a referential communication game, we demonstrate that agents can not only successfully learn to communicate by drawing, but with appropriate inductive biases, can do so in a fashion that humans can interpret.
arXiv Detail & Related papers (2021-06-03T18:17:55Z)
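The referential game behind the sketching setup above amounts to a standard contrastive choice over candidates: the receiver must score the target image above distractors given the sender's drawing, with gradients flowing back through the (differentiable) drawing. The toy PyTorch skeleton below assumes embedding shapes and stands in a random tensor for the differentiable renderer:

```python
# Toy referential-game loss; shapes and names are assumptions, not the
# authors' model. A random tensor stands in for renderer + encoder output.
import torch
import torch.nn.functional as F

def referential_loss(sketch_emb, candidate_embs, target_idx):
    """Receiver scores each candidate image against the sketch embedding."""
    scores = candidate_embs @ sketch_emb               # (num_candidates,)
    return F.cross_entropy(scores.unsqueeze(0),
                           torch.tensor([target_idx]))

sketch_emb = torch.randn(64, requires_grad=True)  # stand-in for the drawing
candidates = torch.randn(4, 64)                   # 1 target + 3 distractors
loss = referential_loss(sketch_emb, candidates, target_idx=0)
loss.backward()   # end-to-end training: gradients reach the "drawing"
```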
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allows it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Networked Multi-Agent Reinforcement Learning with Emergent Communication [18.47483427884452]
Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents.
One way to coordinate is by learning to communicate with each other.
Can the agents develop a language while learning to perform a common task?
arXiv Detail & Related papers (2020-04-06T16:13:23Z)
- Learning to cooperate: Emergent communication in multi-agent navigation [49.11609702016523]
We show that agents performing a cooperative navigation task learn an interpretable communication protocol.
An analysis of the agents' policies reveals that emergent signals spatially cluster the state space.
Using populations of agents, we show that the emergent protocol has basic compositional structure.
arXiv Detail & Related papers (2020-04-02T16:03:17Z)
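A quick way to probe the "emergent signals spatially cluster the state space" claim in the entry above is to cluster logged messages and inspect where in the environment each cluster was emitted. The pipeline below is only a sketch on synthetic data, using scikit-learn:

```python
# Sketch of a grounding analysis: cluster messages, then look at the mean
# agent position per cluster. Data here is synthetic, purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

messages = np.random.randn(500, 8)   # logged message vectors (hypothetical)
positions = np.random.rand(500, 2)   # agent (x, y) at emission time

labels = KMeans(n_clusters=5, n_init=10).fit_predict(messages)
for k in range(5):
    centroid = positions[labels == k].mean(axis=0)
    print(f"message cluster {k}: mean emission position {centroid}")
```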
This list is automatically generated from the titles and abstracts of the papers on this site.