Massively Multiagent Minigames for Training Generalist Agents
- URL: http://arxiv.org/abs/2406.05071v1
- Date: Fri, 7 Jun 2024 16:41:05 GMT
- Title: Massively Multiagent Minigames for Training Generalist Agents
- Authors: Kyoung Whan Choe, Ryan Sullivan, Joseph Suárez
- Abstract summary: We present Meta MMO, a collection of many-agent minigames for use as a reinforcement learning benchmark.
Meta MMO is built on top of Neural MMO, a massively multiagent environment that has been the subject of two previous NeurIPS competitions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Meta MMO, a collection of many-agent minigames for use as a reinforcement learning benchmark. Meta MMO is built on top of Neural MMO, a massively multiagent environment that has been the subject of two previous NeurIPS competitions. Our work expands Neural MMO with several computationally efficient minigames. We explore generalization across Meta MMO by learning to play several minigames with a single set of weights. We release the environment, baselines, and training code under the MIT license. We hope that Meta MMO will spur additional progress on Neural MMO and, more generally, will serve as a useful benchmark for many-agent generalization.
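The abstract's core idea is training one policy across many minigames with a single set of weights. The sketch below illustrates that multi-task pattern in miniature: a minigame is sampled per episode and all updates flow into one shared parameter table. The minigame names, the tabular "policy", and the update rule are all illustrative assumptions, not Meta MMO's actual API or training code (the released baselines use neural networks and PPO-style training).

```python
import random

# Illustrative multi-task loop: one shared parameter store updated
# across several minigames. Names below are assumptions, not Meta MMO's.
MINIGAMES = ["survive", "team_battle", "race_to_center"]  # hypothetical

def train_step(policy, minigame, obs, reward, lr=0.1):
    # Shared table keyed by (minigame, observation bucket); a real agent
    # would share a neural network's weights instead of a dict.
    key = (minigame, obs)
    old = policy.get(key, 0.0)
    policy[key] = old + lr * (reward - old)  # running value estimate

def train(episodes=100, seed=0):
    rng = random.Random(seed)
    policy = {}
    for _ in range(episodes):
        game = rng.choice(MINIGAMES)  # sample a minigame per episode
        obs = rng.randrange(4)        # toy observation bucket
        reward = rng.random()         # toy reward signal
        train_step(policy, game, obs, reward)
    return policy

policy = train()
```

Because every minigame writes into the same store, transfer (or interference) between tasks is visible in how shared entries evolve; that is the generalization question the paper studies with real networks.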
Related papers
- Leading the Pack: N-player Opponent Shaping [52.682734939786464]
We extend Opponent Shaping (OS) methods to environments involving multiple co-players and multiple shaping agents.
We find that as the number of co-players grows, the relative performance of OS methods degrades, suggesting that in the limit OS methods may not perform well.
arXiv Detail & Related papers (2023-12-19T20:01:42Z)
- Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning [36.03451274861878]
Neural MMO 2.0 is a massively multi-agent environment for reinforcement learning research.
It features a flexible task system that allows users to define a broad range of objectives and reward signals.
Version 2.0 is a complete rewrite of its predecessor with three-fold improved performance and compatibility with CleanRL.
arXiv Detail & Related papers (2023-11-07T05:36:39Z)
- Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective [23.600139293202336]
This paper makes the first attempt to investigate human-agent collaboration in MOBA games.
We propose to enable humans and agents to collaborate through explicit communication by designing an efficient Meta-Command Communication-based framework.
We show that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates.
arXiv Detail & Related papers (2023-04-23T12:11:04Z)
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge [70.47759528596711]
We introduce MineDojo, a new framework built on the popular Minecraft game.
We propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function.
Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
arXiv Detail & Related papers (2022-06-17T15:53:05Z)
- The Neural MMO Platform for Massively Multiagent Research [49.51549968445566]
Neural MMO is a research platform that combines large agent populations, long time horizons, open-ended tasks, and modular game systems.
We present Neural MMO as free and open source software with active support, ongoing development, documentation, and additional training, logging, and visualization tools.
arXiv Detail & Related papers (2021-10-14T17:54:49Z)
- Discovering Multi-Agent Auto-Curricula in Two-Player Zero-Sum Games [31.97631243571394]
We introduce a framework, LMAC, that automates the discovery of the update rule without explicit human design.
Surprisingly, even without human design, the discovered MARL algorithms achieve competitive or even better performance.
We show that LMAC is able to generalise from small games to large games, for example training on Kuhn Poker and outperforming PSRO.
arXiv Detail & Related papers (2021-06-04T22:30:25Z)
- Scaling up Mean Field Games with Online Mirror Descent [55.36153467919289]
We address scaling up equilibrium computation in Mean Field Games (MFGs) using Online Mirror Descent (OMD).
We show that continuous-time OMD provably converges to a Nash equilibrium under a natural and well-motivated set of monotonicity assumptions.
A thorough experimental investigation on various single- and multi-population MFGs shows that OMD outperforms traditional algorithms such as Fictitious Play (FP).
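The OMD summary above can be illustrated with the simplest discrete-time instance of Online Mirror Descent: the entropic mirror map, which yields the multiplicative-weights update on a probability simplex. The paper's setting (continuous-time OMD over mean-field populations) is far richer; this toy sketch, with an assumed fixed payoff vector, only shows the core update.

```python
import math

# Discrete-time OMD with an entropic mirror map (multiplicative weights)
# on a single simplex. Payoffs and step size are illustrative assumptions.
def omd_step(probs, payoffs, lr=0.5):
    # x_i <- x_i * exp(lr * payoff_i), then renormalize to the simplex.
    weights = [p * math.exp(lr * g) for p, g in zip(probs, payoffs)]
    total = sum(weights)
    return [w / total for w in weights]

def run(payoffs, steps=50):
    n = len(payoffs)
    x = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(steps):
        x = omd_step(x, payoffs)
    return x

# Against a fixed payoff vector, mass concentrates on the best action.
dist = run([1.0, 0.5, 0.2])
```

In the mean-field setting, the payoff vector is itself induced by the current population distribution, and the monotonicity assumptions cited above are what guarantee convergence of this style of update to a Nash equilibrium.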
arXiv Detail & Related papers (2021-02-28T21:28:36Z)
- Multi-Agent Collaboration via Reward Attribution Decomposition [75.36911959491228]
We propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge.
CollaQ is evaluated on various StarCraft multi-agent challenge maps, where it outperforms existing state-of-the-art techniques.
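The "reward attribution decomposition" named in this entry's title splits each agent's Q-value into a term depending only on its own observation and an interaction term capturing teammates' influence. The sketch below is a toy illustration of that split, not the paper's architecture; the linear scoring functions and the 0.1 scaling are invented for demonstration.

```python
# Toy sketch of a self + interactive Q-value decomposition, in the
# spirit of CollaQ. All functions here are illustrative assumptions.
def q_self(obs):
    # Value from the agent's own observation only (toy linear score).
    return 2.0 * obs

def q_interactive(obs, teammate_obs):
    # Correction term driven by teammates; kept small so that q_self
    # carries most of the signal, mirroring the regularization idea.
    return 0.1 * sum(t - obs for t in teammate_obs)

def q_total(obs, teammate_obs):
    return q_self(obs) + q_interactive(obs, teammate_obs)
```

When teammates' observations match the agent's own, the interactive term vanishes and the decomposition reduces to the self term, which is the degenerate "no collaboration needed" case.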
arXiv Detail & Related papers (2020-10-16T17:42:11Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.