Configurable Agent With Reward As Input: A Play-Style Continuum Generation
- URL: http://arxiv.org/abs/2211.16221v1
- Date: Tue, 29 Nov 2022 13:59:25 GMT
- Title: Configurable Agent With Reward As Input: A Play-Style Continuum Generation
- Authors: Pierre Le Pelletier de Woillemont, Rémi Labory and Vincent Corruble
- Abstract summary: We present a video game environment which lets us define multiple play-styles.
We then introduce CARI: a Reinforcement Learning agent able to simulate a wide range of play-styles.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern video games are becoming richer and more complex in terms of game
mechanics. This complexity allows a wide variety of play-styles to emerge across players.
From the game designer's point of view, this means one needs to anticipate many different
ways the game could be played. Machine Learning (ML) could help address this issue. More
precisely, Reinforcement Learning is a promising answer to the need for automating video
game testing. In this paper we present a video game environment which lets us define
multiple play-styles. We then introduce CARI: a Configurable Agent with Reward as Input,
an agent able to simulate a wide continuum of play-styles. Unlike current methods that
rely on reward shaping, it is not constrained to extreme archetypal behaviors. In
addition, it achieves this through a single training loop, instead of the usual one loop
per play-style. We compare this novel training approach with the more classic reward
shaping approach and conclude that CARI can also outperform the baseline on archetype
generation. This novel agent could be used to investigate behaviors and balancing during
the production of a video game with a realistic amount of training time.
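For intuition, here is a minimal, hedged sketch of the reward-as-input idea described above: the policy receives a vector of reward weights alongside the observation, the weights are sampled once per episode, and the scalar reward being optimized is the corresponding weighted sum of reward components. The names (RewardConditionedPolicy, sample_style_weights, scalarize_reward) and the reward decomposition are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a reward-conditioned ("reward as input") agent.
# Assumptions: the environment exposes a vector of per-style reward
# components, and the policy is conditioned on a weight vector sampled
# once per episode. This is an illustration, not the paper's code.
import numpy as np
import torch
import torch.nn as nn


class RewardConditionedPolicy(nn.Module):
    """Policy network that takes (observation, reward weights) as input."""

    def __init__(self, obs_dim: int, n_reward_terms: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_reward_terms, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # Conditioning on the weights lets one network cover a continuum of
        # play-styles instead of training one model per style.
        return self.net(torch.cat([obs, weights], dim=-1))


def sample_style_weights(n_reward_terms: int) -> np.ndarray:
    # Draw a point on the play-style continuum: a random convex combination
    # of reward terms. Archetypes sit at the corners of the simplex.
    return np.random.dirichlet(np.ones(n_reward_terms)).astype(np.float32)


def scalarize_reward(components: np.ndarray, weights: np.ndarray) -> float:
    # The scalar reward optimized by the RL algorithm is the weighted sum
    # of the per-style reward components under the sampled weights.
    return float(np.dot(components, weights))
```

During training one would sample a weight vector at the start of each episode, feed it to the policy at every step, and score transitions with scalarize_reward inside a single training loop; at test time a designer simply picks the weights that correspond to the play-style they want to exercise.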
Related papers
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Automated Play-Testing Through RL Based Human-Like Play-Styles Generation [0.0]
Reinforcement Learning is a promising answer to the need for automating video game testing.
We present CARMI: an Agent with Relative Metrics as Input, able to emulate players' play-styles, even on previously unseen levels.
arXiv Detail & Related papers (2022-11-29T14:17:20Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- An Unsupervised Video Game Playstyle Metric via State Discretization [20.48689549093258]
We propose the first metric for video game playstyles computed directly from game observations and actions.
Our proposed method is built upon a novel scheme of learning discrete representations.
We demonstrate high playstyle accuracy of our metric in experiments on some video game platforms.
arXiv Detail & Related papers (2021-10-03T08:30:51Z)
- Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use cases showing how these methods are useful for video game production and designers; a generic sketch of the fusion idea appears after this list.
arXiv Detail & Related papers (2021-04-21T16:08:44Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Reinforcement Learning Agents for Ubisoft's Roller Champions [0.26249027950824505]
We present our RL system for Ubisoft's Roller Champions, a 3v3 Competitive Multiplayer Sports Game played on an oval-shaped skating arena.
Our system is designed to keep up with agile, fast-paced development, taking 1-4 days to train a new model following gameplay changes.
We observe that the AIs develop sophisticated co-ordinated strategies, and can aid in balancing the game as an added bonus.
arXiv Detail & Related papers (2020-12-10T23:53:15Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
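As a rough illustration of the policy fusion idea referenced in the Policy Fusion entry above, the sketch below blends the action distributions of several pre-trained policies with designer-chosen weights. It is a generic mixture, not one of the four methods proposed in that paper; the function name fuse_action_distributions and the uniform default weights are assumptions.

```python
# Generic policy-fusion illustration: mix the action distributions of
# pre-trained policies with designer-chosen weights. Not the cited paper's
# specific methods; names and defaults here are assumptions.
from typing import Callable, Optional, Sequence

import numpy as np

# A "policy" is any callable mapping an observation to a probability
# distribution over a fixed discrete action set.
Policy = Callable[[np.ndarray], np.ndarray]


def fuse_action_distributions(
    policies: Sequence[Policy],
    obs: np.ndarray,
    weights: Optional[Sequence[float]] = None,
) -> np.ndarray:
    """Return a weighted mixture of the policies' action distributions."""
    if weights is None:
        weights = [1.0 / len(policies)] * len(policies)  # uniform by default
    mixture = np.zeros_like(policies[0](obs), dtype=np.float64)
    for policy, w in zip(policies, weights):
        mixture += w * np.asarray(policy(obs), dtype=np.float64)
    return mixture / mixture.sum()  # renormalize against rounding drift
```

A designer could, for example, blend a pre-trained aggressive policy and a defensive one at 70/30 to probe intermediate behaviors without retraining, which is the kind of customization such fusion methods aim to support.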