Pick Your Battles: Interaction Graphs as Population-Level Objectives for
Strategic Diversity
- URL: http://arxiv.org/abs/2110.04041v1
- Date: Fri, 8 Oct 2021 11:29:52 GMT
- Title: Pick Your Battles: Interaction Graphs as Population-Level Objectives for
Strategic Diversity
- Authors: Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala,
Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi
- Abstract summary: We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games.
- Score: 49.68758494467258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strategic diversity is often essential in games: in multi-player games, for
example, evaluating a player against a diverse set of strategies will yield a
more accurate estimate of its performance. Furthermore, in games with
non-transitivities, diversity allows a player to cover several winning
strategies. However, despite the significance of strategic diversity, training
agents that exhibit diverse behaviour remains a challenge. In this paper we
study how to construct diverse populations of agents by carefully structuring
how individuals within a population interact. Our approach is based on
interaction graphs, which control the flow of information between agents during
training and can encourage agents to specialise on different strategies,
leading to improved overall performance. We provide evidence for the importance
of diversity in multi-agent training and analyse the effect of applying
different interaction graphs on the training trajectories, diversity and
performance of populations in a range of games. This is an extended version of
the long abstract published at AAMAS.
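As a loose illustration of the abstract's core idea, an interaction graph can be pictured as an adjacency structure that restricts which members of a population train against each other. The sketch below is hypothetical (the names `Agent`, `sample_matchups`, and the ring-shaped graph are illustrative, not from the paper): a fully connected graph recovers standard all-vs-all population training, while sparser graphs restrict the flow of information and can push agents toward specialised strategies.

```python
import random

class Agent:
    """Minimal stand-in for a learning agent (illustrative only)."""
    def __init__(self, name):
        self.name = name

def sample_matchups(population, graph, num_rounds, rng=None):
    """Sample training matchups allowed by an interaction graph.

    graph[i] lists the population indices agent i is permitted to play.
    Only neighbours in the graph ever exchange experience, so the graph
    shapes how information propagates through the population.
    """
    rng = rng or random.Random(0)
    matchups = []
    for _ in range(num_rounds):
        i = rng.randrange(len(population))
        j = rng.choice(graph[i])  # opponent drawn only from i's neighbours
        matchups.append((population[i].name, population[j].name))
    return matchups

population = [Agent(f"agent{k}") for k in range(4)]
# A ring-shaped graph: each agent only ever plays its two neighbours.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(sample_matchups(population, ring, num_rounds=3))
```

Swapping `ring` for a complete graph, a star, or a bipartite structure changes which strategies each agent is exposed to, which is the lever the paper studies.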
Related papers
- Measuring Diversity of Game Scenarios [15.100151112002235]
We aim to bridge the current gaps in literature and practice, offering insights into effective strategies for measuring and integrating diversity in game scenarios.
This survey not only charts a path for future research in diverse game scenarios but also serves as a handbook for industry practitioners seeking to leverage diversity as a key component of game design and development.
arXiv Detail & Related papers (2024-04-15T07:59:52Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must simultaneously learn decisions that maximize their own goals and minimize their adversaries' goals.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, map the strategies of specific opponents, and disrupt those strategies.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Generating Personas for Games with Multimodal Adversarial Imitation Learning [47.70823327747952]
Reinforcement learning has been widely successful in producing agents capable of playing games at a human level.
Going beyond reinforcement learning is necessary to model a wide range of human playstyles.
This paper presents a novel imitation learning approach to generate multiple persona policies for playtesting.
arXiv Detail & Related papers (2023-08-15T06:58:19Z)
- Learning Meta Representations for Agents in Multi-Agent Reinforcement Learning [12.170248966278281]
In multi-agent reinforcement learning, behaviors that agents learn in a single Markov Game (MG) are typically confined to the given number of agents.
In this work, our focus is on creating agents that can generalize across population-varying MGs.
Instead of learning a unimodal policy, each agent learns a policy set comprising effective strategies across a variety of games.
arXiv Detail & Related papers (2021-08-30T04:30:53Z)
- Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games [44.30509625560908]
In open-ended learning algorithms, there are no widely accepted definitions for diversity, making it hard to construct and evaluate the diverse policies.
We propose a unified measure of diversity in multi-agent open-ended learning based on both Behavioral Diversity (BD) and Response Diversity (RD).
We show that many current diversity measures fall in one of the categories of BD or RD but not both.
With this unified diversity measure, we design the corresponding diversity-promoting objective and population effectivity when seeking the best responses in open-ended learning.
arXiv Detail & Related papers (2021-06-09T10:11:06Z)
- Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use-cases for how these methods are indeed useful for video game production and designers.
arXiv Detail & Related papers (2021-04-21T16:08:44Z)
- Modelling Behavioural Diversity for Learning in Open-Ended Games [15.978932309579013]
We offer a geometric interpretation of behavioural diversity in games.
We introduce a novel diversity metric based on determinantal point processes (DPPs).
We prove the uniqueness of the diverse best response and the convergence of our algorithms on two-player games.
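A DPP-style diversity score of this general flavour can be sketched as the determinant of a Gram matrix built from per-policy behaviour (or payoff) vectors: near-duplicate policies make the rows linearly dependent and drive the determinant toward zero, while an orthogonal set maximises it. This is an illustrative sketch of the idea, not the paper's exact formulation; the function name `gram_determinant` is an assumption.

```python
def gram_determinant(features):
    """det(F F^T) for a list of equal-length feature rows (pure Python).

    Each row is one policy's behaviour/payoff vector; the determinant of
    the Gram matrix L[i][j] = <f_i, f_j> acts as a DPP-style diversity
    score for the whole set.
    """
    n = len(features)
    # Build the Gram matrix L[i][j] = <f_i, f_j>.
    L = [[sum(a * b for a, b in zip(fi, fj)) for fj in features] for fi in features]
    # Gaussian elimination with partial pivoting; det = signed product of pivots.
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(L[r][col]))
        if abs(L[pivot][col]) < 1e-12:
            return 0.0  # linearly dependent rows: zero diversity
        if pivot != col:
            L[col], L[pivot] = L[pivot], L[col]
            det = -det
        det *= L[col][col]
        for r in range(col + 1, n):
            factor = L[r][col] / L[col][col]
            for c in range(col, n):
                L[r][c] -= factor * L[col][c]
    return det

# Two orthogonal policies score higher than two near-duplicates.
diverse   = gram_determinant([[1.0, 0.0], [0.0, 1.0]])    # 1.0
redundant = gram_determinant([[1.0, 0.0], [0.99, 0.01]])  # near 0
```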
arXiv Detail & Related papers (2021-03-14T13:42:39Z)
- Quantifying environment and population diversity in multi-agent reinforcement learning [7.548322030720646]
Generalization is a major challenge for multi-agent reinforcement learning.
In this paper, we investigate and quantify the relationship between generalization and diversity in the multi-agent domain.
To better understand the effects of co-player variation, our experiments introduce a new environment-agnostic measure of behavioral diversity.
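One simple environment-agnostic measure in this spirit (a hypothetical illustration, not the paper's metric) is the average pairwise Jensen-Shannon divergence between the agents' empirical action distributions: it needs only action frequencies, not any environment-specific state.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        # KL divergence, skipping zero-probability terms.
        return sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def population_diversity(action_dists):
    """Mean pairwise JS divergence across all agent pairs."""
    n = len(action_dists)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 0.0
    return sum(js_divergence(action_dists[i], action_dists[j])
               for i, j in pairs) / len(pairs)

# Identical policies score 0; behaviourally distinct ones score higher.
same   = population_diversity([[0.5, 0.5], [0.5, 0.5]])  # 0.0
varied = population_diversity([[0.9, 0.1], [0.1, 0.9]])
```

JS divergence is bounded (by ln 2 in nats), which makes scores comparable across populations of different sizes.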
arXiv Detail & Related papers (2021-02-16T18:54:39Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing styles.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.