Joint Attention for Multi-Agent Coordination and Social Learning
- URL: http://arxiv.org/abs/2104.07750v1
- Date: Thu, 15 Apr 2021 20:14:19 GMT
- Title: Joint Attention for Multi-Agent Coordination and Social Learning
- Authors: Dennis Lee, Natasha Jaques, Chase Kew, Douglas Eck, Dale Schuurmans,
Aleksandra Faust
- Abstract summary: We show that joint attention can be useful as a mechanism for improving multi-agent coordination and social learning.
Joint attention leads to higher performance than a competitive centralized critic baseline across multiple environments.
Taken together, these findings suggest that joint attention may be a useful inductive bias for multi-agent learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Joint attention -- the ability to purposefully coordinate attention with
another agent, and mutually attend to the same thing -- is a critical component
of human social cognition. In this paper, we ask whether joint attention can be
useful as a mechanism for improving multi-agent coordination and social
learning. We first develop deep reinforcement learning (RL) agents with a
recurrent visual attention architecture. We then train agents to minimize the
difference between the attention weights that they apply to the environment at
each timestep, and the attention of other agents. Our results show that this
joint attention incentive improves agents' ability to solve difficult
coordination tasks, by reducing the exponential cost of exploring the joint
multi-agent action space. Joint attention leads to higher performance than a
competitive centralized critic baseline across multiple environments. Further,
we show that joint attention enhances agents' ability to learn from experts
present in their environment, even when completing hard exploration tasks that
do not require coordination. Taken together, these findings suggest that joint
attention may be a useful inductive bias for multi-agent learning.
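The incentive described in the abstract can be sketched as a shaped reward term: at each timestep, an agent is penalized for how far its attention map diverges from the group's. A minimal sketch, assuming a KL-divergence penalty against the mean attention map (the weight `beta` and the exact loss form are illustrative assumptions, not the paper's precise formulation):

```python
import math

def joint_attention_bonus(attn_maps, beta=0.1, eps=1e-8):
    """Per-agent reward bonus for attending where the group attends.

    attn_maps: one attention distribution per agent (each row sums to 1).
    Uses KL(agent || group mean); lower divergence gives a larger
    (less negative) bonus. Illustrative sketch, not the paper's exact loss.
    """
    n, dims = len(attn_maps), len(attn_maps[0])
    mean_map = [sum(m[j] for m in attn_maps) / n for j in range(dims)]
    bonuses = []
    for m in attn_maps:
        kl = sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(m, mean_map))
        bonuses.append(-beta * kl)
    return bonuses

# Two agents attending to the same region are penalized little;
# a third attending elsewhere is penalized more.
bonuses = joint_attention_bonus([
    [0.70, 0.20, 0.10],   # agent 0
    [0.65, 0.25, 0.10],   # agent 1 (aligned with agent 0)
    [0.05, 0.15, 0.80],   # agent 2 (attending elsewhere)
])
```

Adding this bonus to the environment reward narrows joint exploration toward states the group attends to together, which is the mechanism the abstract credits for reducing the exponential cost of exploring the joint action space.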
Related papers
- Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation [6.536780912510439]
We propose a novel coalition-matching mechanism that leverages the strengths of agents with different Theory of Mind (ToM) levels.
Our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies.
arXiv Detail & Related papers (2024-05-28T10:59:33Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent
Deep Reinforcement Learning [0.0]
We propose an approach for rewarding strategies where agents collectively exhibit novel behaviors.
JIM rewards joint trajectories based on a centralized measure of novelty designed to function in continuous environments.
Results show that joint exploration is crucial for solving tasks where the optimal strategy requires a high level of coordination.
arXiv Detail & Related papers (2024-02-06T13:02:00Z) - Attention Schema in Neural Agents [66.43628974353683]
In cognitive neuroscience, Attention Schema Theory (AST) supports the idea of distinguishing attention from the attention schema (AS).
AST predicts that an agent can use its own AS to also infer the states of other agents' attention.
We explore different ways in which attention and AS interact with each other.
arXiv Detail & Related papers (2023-05-27T05:40:34Z) - Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to adversarial perturbations of varying strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z) - ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward [29.737986509769808]
We propose ELIGN (expectation alignment), a self-supervised intrinsic reward.
Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations.
We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries.
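The expectation-alignment idea can be sketched as an intrinsic reward equal to the negative error between what an agent actually does and what its neighbors predicted it would do. The squared-error form and function names below are illustrative assumptions, not ELIGN's exact formulation:

```python
def elign_intrinsic_reward(next_obs, neighbor_predictions):
    """Negative mean squared error between the agent's actual next
    observation and each neighbor's prediction of it: the agent is
    rewarded for behaving as nearby agents expect. Illustrative sketch."""
    if not neighbor_predictions:
        return 0.0
    per_neighbor = []
    for pred in neighbor_predictions:
        mse = sum((o - p) ** 2 for o, p in zip(next_obs, pred)) / len(next_obs)
        per_neighbor.append(mse)
    return -sum(per_neighbor) / len(per_neighbor)

# An agent whose neighbors predicted its observation correctly gets 0;
# surprising its neighbors yields a negative intrinsic reward.
aligned = elign_intrinsic_reward([1.0, 2.0], [[1.0, 2.0]])
surprising = elign_intrinsic_reward([1.0, 2.0], [[0.0, 0.0]])
```

Because each agent only needs its neighbors' predictions, the reward can be computed in a decentralized way, matching the summary's analogy to local collaboration among animals.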
arXiv Detail & Related papers (2022-10-09T22:24:44Z) - Multiagent Deep Reinforcement Learning: Challenges and Directions
Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research should address these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z) - UneVEn: Universal Value Exploration for Multi-Agent Reinforcement
Learning [53.73686229912562]
We propose a novel MARL approach called Universal Value Exploration (UneVEn).
UneVEn learns a set of related tasks simultaneously with a linear decomposition of universal successor features.
Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
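The linear decomposition mentioned above is the standard successor-feature identity Q(s, a; w) = psi(s, a) . w: one learned feature vector serves every task whose reward is linear in the same features. A minimal sketch (variable names are illustrative; UneVEn's task sampling and action selection are omitted):

```python
def q_from_successor_features(psi, w):
    """Q(s, a; w) = psi(s, a) . w -- the expected discounted sum of
    reward features psi, weighted by a task vector w."""
    return sum(f * wi for f, wi in zip(psi, w))

# The same learned features evaluated under two different task weightings.
psi_sa = [0.5, 1.2, 0.0]    # successor features for one (s, a) pair
q_task_a = q_from_successor_features(psi_sa, [1.0, 0.0, 0.0])
q_task_b = q_from_successor_features(psi_sa, [0.0, 1.0, 1.0])
```

Learning several related tasks at once amounts to evaluating the shared psi under several weight vectors w, which is what lets exploration on easy related tasks transfer to the hard target task.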
arXiv Detail & Related papers (2020-10-06T19:08:47Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
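The reward-giving mechanism can be sketched as follows: each agent's effective reward is its environment reward plus the incentives other agents give it, minus what it pays out. In the paper the incentive function itself is learned end to end; the fixed incentive matrix and unit cost of giving below are illustrative assumptions:

```python
def effective_rewards(env_rewards, incentives):
    """incentives[i][j] is the reward agent i gives to agent j.
    Each agent receives what others give it and pays for what it
    gives out. Illustrative sketch of the reward flow only."""
    n = len(env_rewards)
    totals = []
    for i in range(n):
        received = sum(incentives[j][i] for j in range(n) if j != i)
        given = sum(incentives[i][j] for j in range(n) if j != i)
        totals.append(env_rewards[i] + received - given)
    return totals

# Agent 0 transfers 0.5 of its reward to agent 1, shaping agent 1's learning.
totals = effective_rewards([1.0, 0.0], [[0.0, 0.5], [0.0, 0.0]])
```

Because giving is costly, an agent learns to pay incentives only when shaping the recipient's behavior raises its own long-run return, which is how cooperation can emerge in general-sum games.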
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.