Interactive inference: a multi-agent model of cooperative joint actions
- URL: http://arxiv.org/abs/2210.13113v1
- Date: Mon, 24 Oct 2022 11:01:29 GMT
- Title: Interactive inference: a multi-agent model of cooperative joint actions
- Authors: Domenico Maisto, Francesco Donnarumma, Giovanni Pezzulo
- Abstract summary: We advance a novel model of multi-agent, cooperative joint actions based on the cognitive framework of active inference.
We show that interactive inference supports successful multi-agent joint actions and reproduces key cognitive and behavioral dynamics of "leaderless" and "leader-follower" joint actions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We advance a novel computational model of multi-agent, cooperative joint
actions that is grounded in the cognitive framework of active inference. The
model assumes that to solve a joint task, such as pressing together a red or
blue button, two (or more) agents engage in a process of interactive inference.
Each agent maintains probabilistic beliefs about the goal of the joint task
(e.g., should we press the red or blue button?) and updates them by observing
the other agent's movements, while in turn selecting movements that make its
own intentions legible and easy to infer by the other agent (i.e., sensorimotor
communication). Over time, the interactive inference aligns both the beliefs
and the behavioral strategies of the agents, hence ensuring the success of the
joint action. We exemplify the functioning of the model in two simulations. The
first simulation illustrates a "leaderless" joint action. It shows that when
two agents lack a strong preference about their joint task goal, they jointly
infer it by observing each other's movements. In turn, this helps the
interactive alignment of their beliefs and behavioral strategies. The second
simulation illustrates a "leader-follower" joint action. It shows that when one
agent ("leader") knows the true joint goal, it uses sensorimotor communication
to help the other agent ("follower") infer it, even if doing this requires
selecting a more costly individual plan. These simulations illustrate that
interactive inference supports successful multi-agent joint actions and
reproduces key cognitive and behavioral dynamics of "leaderless" and
"leader-follower" joint actions observed in human-human experiments. In sum,
interactive inference provides a cognitively inspired, formal framework to
realize cooperative joint actions and consensus in multi-agent systems.
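The belief-alignment dynamic the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): each agent holds a probability distribution over the joint goal, moves toward its currently most probable goal, and applies a Bayesian update after observing the other agent's movement. The `legibility` parameter is an assumed stand-in for how reliably a movement signals the mover's believed goal.

```python
GOALS = ["red", "blue"]

def bayes_update(belief, observed_move, legibility=0.8):
    """Update a belief over goals after seeing the other agent move toward one.
    `legibility` (assumed) is the probability that a movement points at the
    mover's believed goal rather than being uninformative."""
    likelihood = {g: legibility if g == observed_move else 1 - legibility
                  for g in GOALS}
    posterior = {g: likelihood[g] * belief[g] for g in GOALS}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

def choose_move(belief):
    """Move toward the currently most probable goal (a legible policy)."""
    return max(belief, key=belief.get)

# "Leader-follower" case: the leader is near-certain of the goal, the follower is not.
leader = {"red": 0.99, "blue": 0.01}
follower = {"red": 0.5, "blue": 0.5}

for step in range(5):
    follower = bayes_update(follower, choose_move(leader))
    leader = bayes_update(leader, choose_move(follower))

print(follower["red"])  # follower's belief converges toward the leader's goal
```

Starting both agents at a flat 0.5/0.5 prior instead reproduces the "leaderless" case: whichever goal an early movement happens to signal gets mutually reinforced until the pair converges on a shared choice.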
Related papers
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring
Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Inferring the Goals of Communicating Agents from Actions and
Instructions [47.5816320484482]
We introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant.
We show how a third person observer can infer the team's goal via multi-modal inverse planning from actions and instructions.
We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments.
arXiv Detail & Related papers (2023-06-28T13:43:46Z) - Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z) - Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z) - Resonating Minds -- Emergent Collaboration Through Hierarchical Active
Inference [0.0]
We investigate how efficient, automatic coordination processes at the level of mental states (intentions, goals) can lead to collaborative situated problem-solving.
We present a model of hierarchical active inference for collaborative agents (HAICA)
We show that belief resonance and active inference allow for quick and efficient agent coordination, and thus can serve as a building block for collaborative cognitive agents.
arXiv Detail & Related papers (2021-12-02T13:23:44Z) - ToM2C: Target-oriented Multi-agent Communication and Cooperation with
Theory of Mind [18.85252946546942]
Theory of Mind (ToM) enables building socially intelligent agents that are able to communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
arXiv Detail & Related papers (2021-10-15T18:29:55Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - Cooperative and Competitive Biases for Multi-Agent Reinforcement
Learning [12.676356746752893]
Training a multi-agent reinforcement learning (MARL) algorithm is more challenging than training a single-agent reinforcement learning algorithm.
We propose an algorithm that boosts MARL training using the biased action information of other agents based on a friend-or-foe concept.
We empirically demonstrate that our algorithm outperforms existing algorithms in various mixed cooperative-competitive environments.
arXiv Detail & Related papers (2021-01-18T05:52:22Z) - Investigating Human Response, Behaviour, and Preference in Joint-Task
Interaction [3.774610219328564]
We have designed an experiment in order to examine human behaviour and response as they interact with Explainable Planning (XAIP) agents.
We also present the results from an empirical analysis where we examined the behaviour of the two agents for simulated users.
arXiv Detail & Related papers (2020-11-27T22:16:59Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z) - Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.