ToM2C: Target-oriented Multi-agent Communication and Cooperation with
Theory of Mind
- URL: http://arxiv.org/abs/2111.09189v1
- Date: Fri, 15 Oct 2021 18:29:55 GMT
- Title: ToM2C: Target-oriented Multi-agent Communication and Cooperation with
Theory of Mind
- Authors: Yuanfei Wang, Fangwei Zhong, Jing Xu, Yizhou Wang
- Abstract summary: Theory of Mind (ToM) is introduced to build socially intelligent agents who are able to communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
- Score: 18.85252946546942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to predict the mental states of others is a key factor in
effective social interaction. It is also crucial for distributed multi-agent
systems, where agents are required to communicate and cooperate. In this paper,
we introduce such an important social-cognitive skill, i.e., Theory of Mind
(ToM), to build socially intelligent agents who are able to communicate and
cooperate effectively to accomplish challenging tasks. With ToM, each agent is
capable of inferring the mental states and intentions of others according to
its (local) observation. Based on the inferred states, the agents decide "when"
and with "whom" to share their intentions. With the information observed,
inferred, and received, the agents decide their sub-goals and reach a consensus
among the team. Finally, low-level executors independently take
primitive actions to accomplish the sub-goals. We demonstrate the idea in two
typical target-oriented multi-agent tasks: cooperative navigation and
multi-sensor target coverage. The experiments show that the proposed model not
only outperforms state-of-the-art methods in reward and communication
efficiency, but also generalizes well across environments of different scales.
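The pipeline sketched in the abstract (infer others' intentions from local observation, gate communication by "when" and "whom", then choose sub-goals for low-level execution) can be pictured roughly as follows. This is a minimal illustrative sketch; all class, method, and variable names are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the ToM2C-style decision loop described in the
# abstract; none of these names come from the paper's actual code.
class ToMAgent:
    def __init__(self, agent_id, n_agents, n_targets, rng=None):
        self.id = agent_id
        self.n_agents = n_agents
        self.n_targets = n_targets
        self.rng = rng if rng is not None else np.random.default_rng()

    def infer_others(self, local_obs):
        # ToM step: from the local observation, estimate which target each
        # other agent intends to pursue (a distribution per agent).
        logits = self.rng.normal(size=(self.n_agents, self.n_targets))
        return np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    def choose_recipients(self, inferred, threshold=0.5):
        # Decide "when" and with "whom" to communicate: message an agent
        # only when we are uncertain about its intention.
        return [j for j in range(self.n_agents)
                if j != self.id and inferred[j].max() < threshold]

    def decide_subgoal(self, local_obs, inferred, received_msgs):
        # Combine observed, inferred, and received information into a
        # sub-goal (here: pick the target others are least likely to cover).
        coverage = inferred.sum(axis=0)
        for msg in received_msgs:
            coverage[msg["intended_target"]] += 1.0
        return int(np.argmin(coverage))

agent = ToMAgent(agent_id=0, n_agents=3, n_targets=4)
obs = np.zeros(8)                      # placeholder local observation
beliefs = agent.infer_others(obs)
recipients = agent.choose_recipients(beliefs)
subgoal = agent.decide_subgoal(obs, beliefs, received_msgs=[])
print(recipients, subgoal)             # a low-level executor would then act
```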
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability for understanding others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning with Goal Imagination [16.74629849552254]
We propose a model-based consensus mechanism to explicitly coordinate multiple agents.
The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an imagined common goal.
We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states.
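A rough numerical illustration of the consensus idea (not MAGI's actual algorithm; the dynamics and value function below are toy assumptions): every agent scores the same candidate goals with a shared "imagination" model, so all of them independently commit to the same goal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_goals, state_dim = 5, 4

# Hypothetical stand-in for a learned model that "imagines" the future
# state reached if all agents pursued a given common goal.
def imagine_future_state(goal, state):
    return state + 0.1 * goal          # toy dynamics

def value(state):
    return -np.linalg.norm(state - 1.0)  # toy value: proximity to a target state

state = rng.normal(size=state_dim)
candidate_goals = rng.normal(size=(n_goals, state_dim))

# Consensus: every agent evaluates the same candidates with the shared
# model, so all of them independently select the same imagined goal.
scores = [value(imagine_future_state(g, state)) for g in candidate_goals]
common_goal = candidate_goals[int(np.argmax(scores))]
print("agents agree on goal:", np.round(common_goal, 2))
```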
arXiv Detail & Related papers (2024-03-05T18:07:34Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitively inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA, driven by GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication.
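One way to picture such a modular, LLM-driven agent loop is sketched below; the module boundaries and prompts are illustrative assumptions rather than CoELA's exact interfaces.

```python
# Illustrative modular loop for an LLM-driven cooperative agent; module
# boundaries are assumptions inspired by the abstract, not CoELA's code.
from dataclasses import dataclass, field

@dataclass
class Memory:
    events: list = field(default_factory=list)
    def update(self, obs, msg):
        self.events.append({"obs": obs, "msg": msg})
    def summary(self):
        return f"{len(self.events)} remembered events"

def llm(prompt: str) -> str:
    # Placeholder for a call to a large language model (e.g. GPT-4).
    return "explore"  # canned response keeps the sketch runnable

def perceive(raw_obs):
    return f"perceived({raw_obs})"       # raw sensing -> text description

def communicate(memory: Memory) -> str:
    # Communication is costly, so the LLM decides whether a message is worth it.
    return llm(f"Given {memory.summary()}, send a message or stay silent.")

def plan(memory: Memory) -> str:
    return llm(f"Given {memory.summary()}, choose the next high-level plan.")

def execute(plan_text: str):
    return f"primitive actions for '{plan_text}'"

memory = Memory()
obs = perceive("kitchen, mug on table")
memory.update(obs, msg=communicate(memory))
print(execute(plan(memory)))
```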
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Inferring the Goals of Communicating Agents from Actions and Instructions [47.5816320484482]
We introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant.
We show how a third person observer can infer the team's goal via multi-modal inverse planning from actions and instructions.
We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments.
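At its core this is Bayesian goal inference over two evidence channels. A toy version, with hand-made likelihoods standing in for the paper's planning and instruction models:

```python
import numpy as np

goals = ["get_gem", "open_door", "fetch_key"]
prior = np.array([1/3, 1/3, 1/3])

# Hypothetical likelihoods: how probable the observed actions and the
# principal's instruction are under each candidate goal. In the paper
# these come from planning and language models, respectively.
p_actions_given_goal = np.array([0.6, 0.1, 0.3])
p_instruction_given_goal = np.array([0.7, 0.2, 0.1])

# Multi-modal inverse planning: combine both evidence sources in one posterior.
posterior = prior * p_actions_given_goal * p_instruction_given_goal
posterior /= posterior.sum()
for g, p in zip(goals, posterior):
    print(f"P({g} | actions, instruction) = {p:.2f}")
```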
arXiv Detail & Related papers (2023-06-28T13:43:46Z)
- Resonating Minds -- Emergent Collaboration Through Hierarchical Active Inference [0.0]
We investigate how efficient, automatic coordination processes at the level of mental states (intentions, goals) can lead to collaborative situated problem-solving.
We present a model of hierarchical active inference for collaborative agents (HAICA).
We show that belief resonance and active inference allow for quick and efficient agent coordination, and thus can serve as a building block for collaborative cognitive agents.
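Belief resonance can be pictured as a partner's inferred goal beliefs softly pulling on an agent's own; the mixing rule below is an illustrative assumption, not HAICA's exact update.

```python
import numpy as np

def belief_resonance(own_belief, inferred_partner_belief, resonance=0.3):
    # Mix the partner's inferred goal distribution into our own; a higher
    # "resonance" means stronger susceptibility to the partner's intentions.
    mixed = (1 - resonance) * own_belief + resonance * inferred_partner_belief
    return mixed / mixed.sum()

own = np.array([0.6, 0.3, 0.1])       # our current belief over 3 joint goals
partner = np.array([0.1, 0.2, 0.7])   # what we infer the partner intends
print(belief_resonance(own, partner)) # beliefs drift toward the partner's goal
```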
arXiv Detail & Related papers (2021-12-02T13:23:44Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via Learned Messaging [14.960795846548029]
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation learning abilities of deep neural networks.
This paper considers the case where there is a single, powerful central agent that can observe the entire observation space, and multiple low-powered local agents that only receive local observations and cannot communicate with each other.
The central agent learns what message to send to each local agent, based on the global observations, by determining what additional information that agent should receive so that it can make a better decision.
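A skeleton of that two-level layout (all names and the toy policy are assumptions; in HAMMER the message network is trained with reinforcement learning):

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, obs_dim, msg_dim = 4, 6, 3

# Hypothetical central message network: maps the global observation to one
# learned message per local agent. In HAMMER its weights would be learned.
W = rng.normal(size=(n_agents, msg_dim, obs_dim * n_agents))

def central_messages(global_obs):
    flat = global_obs.reshape(-1)
    return np.tanh(W @ flat)            # shape: (n_agents, msg_dim)

def local_policy(local_obs, message):
    # Local agents cannot talk to each other; each acts on its own
    # observation augmented with the message the central agent sent it.
    return float(np.sum(local_obs) + np.sum(message)) > 0  # toy binary action

global_obs = rng.normal(size=(n_agents, obs_dim))
msgs = central_messages(global_obs)
actions = [local_policy(global_obs[i], msgs[i]) for i in range(n_agents)]
print(actions)
```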
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
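That principle can be operationalized as an intrinsic reward measuring how far the joint outcome differs from composing each agent's predicted solo effect; the dynamics below are toy stand-ins for the paper's learned prediction models.

```python
import numpy as np

def joint_effect(state, a1, a2):
    # True environment effect of acting together (toy: actions interact).
    return state + a1 * a2

def solo_effect(state, a):
    # Predicted effect if an agent acted alone (toy: purely additive).
    return state + a

def synergy_bonus(state, a1, a2):
    # Intrinsic reward: how far the joint outcome is from what composing
    # the two solo predictions would have achieved on its own.
    composed = solo_effect(solo_effect(state, a1), a2)
    return float(np.linalg.norm(joint_effect(state, a1, a2) - composed))

state = np.zeros(2)
print(synergy_bonus(state, a1=np.array([1.0, 0.0]), a2=np.array([0.0, 2.0])))
```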
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.