Joint Attention for Multi-Agent Coordination and Social Learning
- URL: http://arxiv.org/abs/2104.07750v1
- Date: Thu, 15 Apr 2021 20:14:19 GMT
- Title: Joint Attention for Multi-Agent Coordination and Social Learning
- Authors: Dennis Lee, Natasha Jaques, Chase Kew, Douglas Eck, Dale Schuurmans,
Aleksandra Faust
- Abstract summary: We show that joint attention can be useful as a mechanism for improving multi-agent coordination and social learning.
Joint attention leads to higher performance than a competitive centralized critic baseline across multiple environments.
Taken together, these findings suggest that joint attention may be a useful inductive bias for multi-agent learning.
- Score: 108.31232213078597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Joint attention -- the ability to purposefully coordinate attention with
another agent, and mutually attend to the same thing -- is a critical component
of human social cognition. In this paper, we ask whether joint attention can be
useful as a mechanism for improving multi-agent coordination and social
learning. We first develop deep reinforcement learning (RL) agents with a
recurrent visual attention architecture. We then train agents to minimize the
difference between the attention weights that they apply to the environment at
each timestep, and the attention of other agents. Our results show that this
joint attention incentive improves agents' ability to solve difficult
coordination tasks, by reducing the exponential cost of exploring the joint
multi-agent action space. Joint attention leads to higher performance than a
competitive centralized critic baseline across multiple environments. Further,
we show that joint attention enhances agents' ability to learn from experts
present in their environment, even when completing hard exploration tasks that
do not require coordination. Taken together, these findings suggest that joint
attention may be a useful inductive bias for multi-agent learning.
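The incentive described in the abstract can be sketched as a shaping bonus that penalizes the divergence between one agent's attention weights and those of its peers. The function name, the choice of KL divergence, and the coefficient `beta` below are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def joint_attention_bonus(attn, beta=0.1):
    """Hypothetical sketch of a joint-attention shaping bonus.

    attn: array of shape (n_agents, n_locations); each row is one agent's
    attention distribution over the environment (rows sum to 1).
    Each agent is penalized in proportion to the KL divergence between its
    own attention map and the mean attention map of the other agents.
    """
    n_agents = attn.shape[0]
    bonuses = np.zeros(n_agents)
    for i in range(n_agents):
        # Mean attention of all agents except agent i.
        others = np.delete(attn, i, axis=0).mean(axis=0)
        # KL(attn_i || others); small epsilon for numerical stability.
        kl = np.sum(attn[i] * np.log((attn[i] + 1e-8) / (others + 1e-8)))
        bonuses[i] = -beta * kl
    return bonuses

# When agents attend to the same thing, the penalty vanishes;
# when their attention diverges, each agent receives a negative bonus.
aligned = joint_attention_bonus(np.array([[0.5, 0.5], [0.5, 0.5]]))
diverged = joint_attention_bonus(np.array([[0.9, 0.1], [0.1, 0.9]]))
```

In practice such a bonus would be added to each agent's environment reward at every timestep, so gradient updates push the agents' attention maps toward one another.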
Related papers
- Improving How Agents Cooperate: Attention Schemas in Artificial Neural Networks [0.0]
Growing evidence suggests that the brain uses an "attention schema" to monitor, predict, and help control attention.
It has also been suggested that an attention schema improves social intelligence by allowing one person to better predict another.
Given their potential advantages, attention schemas have been increasingly tested in machine learning.
arXiv Detail & Related papers (2024-11-01T19:18:07Z)
- Inverse Attention Agent for Multi-Agent System [6.196239958087161]
A major challenge for Multi-Agent Systems is enabling agents to adapt dynamically to diverse environments in which opponents and teammates may continually change.
We introduce Inverse Attention Agents that adopt concepts from the Theory of Mind, implemented algorithmically using an attention mechanism and trained in an end-to-end manner.
We demonstrate that the inverse attention network successfully infers the attention of other agents, and that this information improves agent performance.
arXiv Detail & Related papers (2024-10-29T06:59:11Z)
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation [6.536780912510439]
We propose a novel matching coalition mechanism that leverages the strengths of agents with different ToM levels.
Our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies.
arXiv Detail & Related papers (2024-05-28T10:59:33Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent Deep Reinforcement Learning [0.0]
We propose an approach for rewarding strategies where agents collectively exhibit novel behaviors.
JIM rewards joint trajectories based on a centralized measure of novelty designed to function in continuous environments.
Results show that joint exploration is crucial for solving tasks where the optimal strategy requires a high level of coordination.
arXiv Detail & Related papers (2024-02-06T13:02:00Z)
- Attention Schema in Neural Agents [66.43628974353683]
In cognitive neuroscience, Attention Schema Theory (AST) supports the idea of distinguishing attention from an attention schema (AS).
AST predicts that an agent can use its own AS to also infer the states of other agents' attention.
We explore different ways in which attention and AS interact with each other.
arXiv Detail & Related papers (2023-05-27T05:40:34Z)
- UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning [53.73686229912562]
We propose a novel MARL approach called Universal Value Exploration (UneVEn).
UneVEn learns a set of related tasks simultaneously with a linear decomposition of universal successor features.
Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
arXiv Detail & Related papers (2020-10-06T19:08:47Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.