Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches
- URL: http://arxiv.org/abs/2106.15691v1
- Date: Tue, 29 Jun 2021 19:53:15 GMT
- Title: Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches
- Authors: Annie Wong, Thomas Bäck, Anna V. Kononova, Aske Plaat
- Abstract summary: We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research addresses these challenges with an interdisciplinary approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper surveys the field of multiagent deep reinforcement learning. The
combination of deep neural networks with reinforcement learning has gained
increased traction in recent years and is slowly shifting the focus from
single-agent to multiagent environments. Dealing with multiple agents is
inherently more complex as (a) the future rewards depend on the joint actions
of multiple players and (b) the computational complexity of functions
increases. We present the most common multiagent problem representations and
their main challenges, and identify five research areas that address one or
more of these challenges: centralised training and decentralised execution,
opponent modelling, communication, efficient coordination, and reward shaping.
We find that many computational studies rely on unrealistic assumptions or are
not generalisable to other settings; they struggle to overcome the curse of
dimensionality or nonstationarity. Approaches from psychology and sociology
capture promising relevant behaviours such as communication and coordination.
We suggest that, for multiagent reinforcement learning to be successful, future
research addresses these challenges with an interdisciplinary approach to open
up new possibilities for more human-oriented solutions in multiagent
reinforcement learning.
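Of the five research areas the survey names, centralised training with decentralised execution (CTDE) is the most architectural. Below is a minimal sketch of the CTDE pattern in PyTorch; the class names, network shapes, and dimensions are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of centralised training with decentralised execution (CTDE).
# All class names and dimensions here are illustrative, not from the survey.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 4

class Actor(nn.Module):
    """Decentralised policy: each agent acts on its own observation only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM))
    def forward(self, obs):                       # obs: (batch, OBS_DIM)
        return torch.softmax(self.net(obs), dim=-1)

class CentralCritic(nn.Module):
    """Centralised value function: during training it sees all agents'
    observations and actions, which mitigates the nonstationarity that
    independently learning co-players would otherwise introduce."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):         # (batch, N_AGENTS, ...)
        x = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=-1)
        return self.net(x)

actors = [Actor() for _ in range(N_AGENTS)]       # used at execution time
critic = CentralCritic()                          # used at training time only
```

The critic is discarded at execution time, so each deployed agent needs only its own observations, which is what makes the execution decentralised.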
Related papers
- Active Legibility in Multiagent Reinforcement Learning [3.7828554251478734]
The legibility-oriented framework allows agents to take legible actions that help others optimise their behaviours.
The experimental results demonstrate that the new framework is more efficient and costs less training time compared to several multiagent reinforcement learning algorithms.
arXiv Detail & Related papers (2024-10-28T12:15:49Z)
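As a rough illustration of legible behaviour, here is a hypothetical sketch based on the common probabilistic formalisation of legibility (choosing actions that raise an observer's posterior over the agent's true goal). It is not the framework from the entry above, and every name in it is illustrative.

```python
# Hypothetical sketch of legible action selection: blend task value with how
# strongly an action reveals the agent's true goal to an observer. This is a
# generic formalisation of legibility, not the algorithm from the paper above.
import numpy as np

def legible_action(q_values, goal_likelihoods, true_goal, weight=0.5):
    """q_values: (n_actions,) task values under the agent's true goal.
    goal_likelihoods: (n_goals, n_actions) observer's P(action | goal).
    Returns the action index trading off reward against legibility."""
    n_goals = goal_likelihoods.shape[0]
    prior = np.full(n_goals, 1.0 / n_goals)
    # Observer's posterior over goals after seeing each candidate action.
    joint = prior[:, None] * goal_likelihoods            # (n_goals, n_actions)
    posterior = joint / joint.sum(axis=0, keepdims=True)
    legibility = posterior[true_goal]                    # P(true goal | action)
    return int(np.argmax((1 - weight) * q_values + weight * legibility))
```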
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
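The iterated prisoner's dilemma referenced above is easy to reproduce. The minimal sketch below uses the standard payoff convention; the two example policies are classic baselines, not the paper's learning-aware agents.

```python
# Minimal iterated prisoner's dilemma, the testbed the entry above analyses.
# Payoffs follow the standard (T, R, P, S) = (5, 3, 1, 0) convention.
COOPERATE, DEFECT = 0, 1
PAYOFF = {  # (my action, opponent action) -> (my reward, opponent reward)
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def play_ipd(policy_a, policy_b, rounds=100):
    """Run two policies against each other; each policy maps the opponent's
    previous action (None on round 1) to its next action."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = policy_a(last_b), policy_b(last_a)
        ra, rb = PAYOFF[(a, b)]
        score_a, score_b = score_a + ra, score_b + rb
        last_a, last_b = a, b
    return score_a, score_b

# Example: tit-for-tat vs. always-defect.
tit_for_tat = lambda prev: COOPERATE if prev is None else prev
always_defect = lambda prev: DEFECT
print(play_ipd(tit_for_tat, always_defect))   # tit-for-tat loses round 1 only
```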
- Learning Emergence of Interaction Patterns across Independent RL Agents in Multi-Agent Environments [3.0284592792243794]
The Bottom Up Network (BUN) treats the collective of agents as a unified entity.
Our empirical evaluations across a variety of cooperative multi-agent scenarios, including tasks such as cooperative navigation and traffic control, consistently demonstrate BUN's superiority over baseline methods with substantially reduced computational costs.
arXiv Detail & Related papers (2024-10-03T14:25:02Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
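Denoising score matching, the technique the SocialGFs entry relies on, can be sketched compactly: perturb offline samples with Gaussian noise and regress a network onto the score of the noising kernel. The network shape, noise scale, and training loop below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of denoising score matching (Vincent, 2011), the technique the
# entry above uses to learn gradient fields from offline samples.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, dim))
    def forward(self, x):
        return self.net(x)   # estimate of grad_x log p(x)

def dsm_loss(score_net, x, sigma=0.1):
    """Perturb samples with Gaussian noise; the optimal score network then
    matches (x - x_noisy) / sigma^2, the score of the noising kernel."""
    noise = torch.randn_like(x) * sigma
    x_noisy = x + noise
    target = -noise / sigma**2
    return ((score_net(x_noisy) - target) ** 2).sum(dim=-1).mean()

# One training step over a batch of offline states (batch, dim):
net = ScoreNet(dim=4)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(64, 4)                 # placeholder offline samples
loss = dsm_loss(net, x)
opt.zero_grad(); loss.backward(); opt.step()
```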
- Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent Deep Reinforcement Learning [0.0]
We propose an approach for rewarding strategies where agents collectively exhibit novel behaviors.
JIM rewards joint trajectories based on a centralized measure of novelty designed to function in continuous environments.
Results show that joint exploration is crucial for solving tasks where the optimal strategy requires a high level of coordination.
arXiv Detail & Related papers (2024-02-06T13:02:00Z)
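As a hedged sketch of a centralised novelty bonus over joint trajectories, the snippet below uses a generic nearest-neighbour novelty measure over concatenated joint states; JIM's actual measure differs, and every name here is illustrative.

```python
# Illustrative sketch of a centralised novelty bonus over joint states, in the
# spirit of the entry above. The k-nearest-neighbour measure is a generic
# choice, not the paper's exact formulation.
import numpy as np

class JointNoveltyReward:
    def __init__(self, k=5):
        self.memory = []          # past joint states, each (n_agents * dim,)
        self.k = k

    def __call__(self, joint_state):
        """Reward the whole team by how far the concatenated joint state
        lies from its k nearest previously visited joint states."""
        s = np.asarray(joint_state, dtype=float)
        if len(self.memory) < self.k:
            bonus = 1.0           # everything is novel at the start
        else:
            dists = np.linalg.norm(np.stack(self.memory) - s, axis=1)
            bonus = float(np.sort(dists)[: self.k].mean())
        self.memory.append(s)
        return bonus              # the same bonus is given to every agent
```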
- A Review of Cooperation in Multi-agent Learning [5.334450724000142]
Cooperation in multi-agent learning (MAL) is a topic at the intersection of numerous disciplines.
This paper provides an overview of the fundamental concepts, problem settings and algorithms of multi-agent learning.
arXiv Detail & Related papers (2023-12-08T16:42:15Z)
- Responsible Emergent Multi-Agent Behavior [2.9370710299422607]
The state of the art in Responsible AI has ignored one crucial point: human problems are multi-agent problems.
From driving in traffic to negotiating economic policy, human problem-solving involves interaction and the interplay of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent behavior.
arXiv Detail & Related papers (2023-11-02T21:37:32Z)
- Neural Amortized Inference for Nested Multi-agent Reasoning [54.39127942041582]
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z)
- Q-Mixing Network for Multi-Agent Pathfinding in Partially Observable Grid Environments [62.997667081978825]
We consider the problem of multi-agent navigation in partially observable grid environments.
We suggest a reinforcement learning approach in which the agents first learn policies that map observations to actions and then follow these policies to reach their goals.
arXiv Detail & Related papers (2021-08-13T09:44:47Z)
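The learn-then-execute recipe in the entry above can be illustrated with plain tabular Q-learning per agent; this is a generic stand-in, not the paper's Q-mixing network, and the observation encoding is an assumption.

```python
# Minimal sketch of the learn-then-execute recipe: each agent learns a mapping
# from (partial) grid observations to actions with tabular Q-learning.
import random
from collections import defaultdict

class GridAgent:
    def __init__(self, n_actions=5, lr=0.1, gamma=0.99, eps=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_actions = n_actions

    def act(self, obs):
        """obs is a hashable encoding of the agent's local grid view."""
        if random.random() < self.eps:                    # explore
            return random.randrange(self.n_actions)
        values = self.q[obs]
        return values.index(max(values))                  # exploit

    def learn(self, obs, action, reward, next_obs, done):
        target = reward if done else reward + self.gamma * max(self.q[next_obs])
        self.q[obs][action] += self.lr * (target - self.q[obs][action])

# Execution phase: set eps = 0 and simply follow the learned policy.
```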
- Cooperative Exploration for Multi-Agent Deep Reinforcement Learning [127.4746863307944]
We propose cooperative multi-agent exploration (CMAE) for deep reinforcement learning.
The shared goal is selected from multiple projected state spaces via a normalized entropy-based technique.
We demonstrate that CMAE consistently outperforms baselines on various tasks.
arXiv Detail & Related papers (2021-07-23T20:06:32Z)
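The normalized-entropy goal selection described in the CMAE entry might look roughly as follows: project visit counts onto low-dimensional subspaces and pick a goal from the least uniformly explored one. The projection handling and tie-breaking here are assumptions, not the paper's exact procedure.

```python
# Hedged sketch of normalized-entropy goal selection in the spirit of CMAE.
import numpy as np

def normalized_entropy(counts):
    """Entropy of a visit-count distribution, scaled to [0, 1] by the
    maximum entropy log(n_bins); low values mean uneven exploration."""
    if len(counts) < 2:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

def select_goal(projected_counts):
    """projected_counts: list of 1-D visit-count arrays, one per projection.
    Returns (projection index, goal state index within that projection)."""
    entropies = [normalized_entropy(c) for c in projected_counts]
    space = int(np.argmin(entropies))               # least evenly explored
    goal = int(np.argmin(projected_counts[space]))  # rarely visited state
    return space, goal

# Example: two projections; the second is clearly under-explored.
print(select_goal([np.array([5., 5., 5.]), np.array([9., 1., 0.5])]))
```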