Developing, Evaluating and Scaling Learning Agents in Multi-Agent
Environments
- URL: http://arxiv.org/abs/2209.10958v1
- Date: Thu, 22 Sep 2022 12:28:29 GMT
- Title: Developing, Evaluating and Scaling Learning Agents in Multi-Agent
Environments
- Authors: Ian Gemp, Thomas Anthony, Yoram Bachrach, Avishkar Bhoopchand, Kalesha
Bullard, Jerome Connor, Vibhavari Dasagi, Bart De Vylder, Edgar
Duenez-Guzman, Romuald Elie, Richard Everett, Daniel Hennes, Edward Hughes,
Mina Khan, Marc Lanctot, Kate Larson, Guy Lever, Siqi Liu, Luke Marris, Kevin
R. McKee, Paul Muller, Julien Perolat, Florian Strub, Andrea Tacchetti,
Eugene Tarassov, Zhe Wang, Karl Tuyls
- Abstract summary: The Game Theory & Multi-Agent team at DeepMind studies several aspects of multi-agent learning.
A signature aim of our group is to use the resources and expertise made available to us at DeepMind in deep reinforcement learning to explore multi-agent systems in complex environments.
- Score: 38.16072318606355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Game Theory & Multi-Agent team at DeepMind studies several aspects of
multi-agent learning ranging from computing approximations to fundamental
concepts in game theory to simulating social dilemmas in rich spatial
environments and training 3-d humanoids in difficult team coordination tasks. A
signature aim of our group is to use the resources and expertise made available
to us at DeepMind in deep reinforcement learning to explore multi-agent systems
in complex environments and use these benchmarks to advance our understanding.
Here, we summarise the recent work of our team and present a taxonomy that we
feel highlights many important open challenges in multi-agent research.
Related papers
- A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond [84.95530356322621]
This survey presents a systematic review of the advancements in code intelligence.
It covers over 50 representative models and their variants, more than 20 categories of tasks, and over 680 related works.
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence.
arXiv Detail & Related papers (2024-03-21T08:54:56Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Enabling Multi-Agent Transfer Reinforcement Learning via Scenario Independent Representation [0.7366405857677227]
Multi-Agent Reinforcement Learning (MARL) algorithms are widely adopted in tackling complex tasks that require collaboration and competition among agents.
We introduce a novel framework that enables transfer learning for MARL through unifying various state spaces into fixed-size inputs.
We show significant enhancements in multi-agent learning performance using maneuvering skills learned from other scenarios compared to agents learning from scratch.
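The unification step described above can be sketched as mapping variable-length observations from different scenarios into one fixed-size input; the zero-padding scheme and function name below are illustrative assumptions, not the paper's actual representation.

```python
import numpy as np

def unify_state(obs: np.ndarray, fixed_dim: int) -> np.ndarray:
    """Map a variable-length observation to a fixed-size input vector.

    Truncates when obs is longer than fixed_dim, zero-pads when shorter.
    A simple illustrative stand-in for a scenario-independent representation.
    """
    out = np.zeros(fixed_dim, dtype=np.float32)
    n = min(len(obs), fixed_dim)
    out[:n] = obs[:n]
    return out

# Observations from two scenarios with different state sizes both
# land in the same 8-dimensional input space, so one policy network
# (and its learned skills) can be reused across scenarios.
a = unify_state(np.array([1.0, 2.0, 3.0]), 8)
b = unify_state(np.ones(12), 8)
assert a.shape == b.shape == (8,)
```

In practice a learned projection or attention-based encoder would replace the padding, but the fixed output dimension is what makes cross-scenario transfer possible.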
arXiv Detail & Related papers (2024-02-13T02:48:18Z)
- A Review of Cooperation in Multi-agent Learning [5.334450724000142]
Cooperation in multi-agent learning (MAL) is a topic at the intersection of numerous disciplines.
This paper provides an overview of the fundamental concepts, problem settings and algorithms of multi-agent learning.
arXiv Detail & Related papers (2023-12-08T16:42:15Z)
- Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [52.930183136111864]
We propose using scorable negotiation to evaluate Large Language Models (LLMs).
To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities.
We provide procedures to create new games and to increase their difficulty, yielding an evolving benchmark.
arXiv Detail & Related papers (2023-09-29T13:33:06Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
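A reward machine of the kind described above can be illustrated as a small finite-state automaton whose transitions fire on high-level events and emit rewards; the states, events, and reward values below are invented for illustration and are not taken from the paper.

```python
class RewardMachine:
    """Finite-state machine over high-level events.

    transitions: {(state, event): (next_state, reward)}; unrecognised
    events self-loop with zero reward.  Because reward depends on the
    machine state, not just the current observation, this captures
    non-Markovian reward structure.
    """

    def __init__(self, transitions, initial, terminal):
        self.transitions = transitions
        self.state = initial
        self.terminal = terminal

    def step(self, event):
        self.state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0)
        )
        return reward

    @property
    def done(self):
        return self.state == self.terminal

# Hypothetical two-sub-task job: collect a key, then reach the door.
rm = RewardMachine(
    transitions={
        ("u0", "got_key"): ("u1", 0.1),   # sub-task 1 complete
        ("u1", "at_door"): ("u2", 1.0),   # sub-task 2 complete
    },
    initial="u0",
    terminal="u2",
)
rm.step("at_door")  # no reward: the key has not been collected yet
rm.step("got_key")
rm.step("at_door")
assert rm.done
```

The decomposition angle in the paper is that each machine state corresponds to a sub-task an individual agent can learn, which is what makes the approach cooperative.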
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- Multi-Agent Interplay in a Competitive Survival Environment [0.0]
This entry is the author's thesis, "Multi-Agent Interplay in a Competitive Survival Environment", for the Master's Degree in Artificial Intelligence and Robotics at Sapienza University of Rome, 2022.
arXiv Detail & Related papers (2023-01-19T12:04:03Z)
- Deep Reinforcement Learning for Multi-Agent Interaction [14.532965827043254]
The Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control.
This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
arXiv Detail & Related papers (2022-08-02T21:55:56Z)
- Cooperative Exploration for Multi-Agent Deep Reinforcement Learning [127.4746863307944]
We propose cooperative multi-agent exploration (CMAE) for deep reinforcement learning.
The goal is selected from multiple projected state spaces via a normalized entropy-based technique.
We demonstrate that CMAE consistently outperforms baselines on various tasks.
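The normalized-entropy selection step might look like the following sketch: estimate the entropy of visit counts in several projected (restricted) state spaces and pick goals from the least-uniformly explored one. The projection names, counts, and tie-breaking here are assumptions for illustration, not CMAE's exact procedure.

```python
import math
from collections import Counter

def normalized_entropy(counts):
    """Shannon entropy of a visit-count distribution, normalized to [0, 1]."""
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts))

def select_goal_space(projected_visits):
    """Pick the projected state space whose visit distribution has the
    lowest normalized entropy, i.e. the most skewed (under-explored) one."""
    return min(
        projected_visits,
        key=lambda k: normalized_entropy(list(projected_visits[k].values())),
    )

# Hypothetical visit counts in two projections of the full state space:
# agent positions are visited near-uniformly, but the box barely moves,
# so the box-position projection is selected for goal generation.
visits = {
    "agent_positions": Counter({(0, 0): 50, (0, 1): 48, (1, 0): 51}),
    "box_position": Counter({(2, 2): 140, (3, 2): 3}),
}
assert select_goal_space(visits) == "box_position"
```

The intuition is that a skewed visit distribution in a projection signals states the team has rarely reached, making that projection a productive source of shared exploration goals.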
arXiv Detail & Related papers (2021-07-23T20:06:32Z)
- Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research addresses these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.