Automated Play-Testing Through RL Based Human-Like Play-Styles Generation
- URL: http://arxiv.org/abs/2211.17188v1
- Date: Tue, 29 Nov 2022 14:17:20 GMT
- Title: Automated Play-Testing Through RL Based Human-Like Play-Styles Generation
- Authors: Pierre Le Pelletier de Woillemont, Rémi Labory, Vincent Corruble
- Abstract summary: Reinforcement Learning is a promising answer to the need to automate video game testing.
We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate players' play-styles, even on previously unseen levels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The increasing complexity of gameplay mechanisms in modern video games is
leading to the emergence of a wider range of ways to play games. The variety of
possible play-styles needs to be anticipated by designers, through automated
tests. Reinforcement Learning is a promising answer to the need to automate
video game testing. To that effect, one needs to train an agent to play the
game while ensuring this agent generates the same play-styles as the
players, in order to give meaningful feedback to the designers. We present
CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to
emulate players' play-styles, even on previously unseen levels. Unlike
current methods, it does not rely on having full trajectories, but only on summary
data. Moreover, it requires only a small amount of human data, making it compatible with the
constraints of modern video game production. This novel agent could be used to
investigate behaviors and balancing during the production of a video game with
a realistic amount of training time.
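The core idea of conditioning an agent on relative metrics rather than full trajectories can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the metric names, the normalization scheme, and the `build_observation` helper are all illustrative assumptions. The point is that the policy's input expresses how the agent's running behavioral summary deviates from a designer-configured target play-style, which lets the same policy track a play-style even on unseen levels where absolute metric values differ.

```python
def relative_metrics(current, target, eps=1e-8):
    """Express running behavioral metrics relative to a target play-style.
    Zero means 'on target'; the sign says which direction to correct."""
    return [(c - t) / (abs(t) + eps) for c, t in zip(current, target)]

def build_observation(game_state, current_metrics, target_metrics):
    """Augment the raw game state with the relative-metric vector;
    the RL policy conditions on this combined observation."""
    return list(game_state) + relative_metrics(current_metrics, target_metrics)

# Example: a cautious target play-style (low aggression, high exploration).
state = [0.2, 0.5, 0.1]        # toy game-state features
current = [0.6, 0.3]           # metrics observed so far: aggression, exploration
target = [0.2, 0.8]            # designer-configured play-style targets
obs = build_observation(state, current, target)
```

Because only summary metrics (not full trajectories) enter the observation, this kind of conditioning matches the paper's stated constraint of working from summary data alone.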
Related papers
- Behavioural Cloning in VizDoom [1.4999444543328293]
This paper describes methods for training autonomous agents to play the game "Doom 2" through Imitation Learning (IL).
We also explore how Reinforcement Learning (RL) compares to IL for humanness by comparing camera movement and trajectory data.
arXiv Detail & Related papers (2024-01-08T16:15:43Z)
- Preference-conditioned Pixel-based AI Agent For Game Testing [1.5059676044537105]
Game-testing AI agents that learn by interaction with the environment have the potential to mitigate these challenges.
This paper proposes an agent design that mainly depends on pixel-based state observations while exploring the environment conditioned on a user's preference.
Our agent significantly outperforms state-of-the-art pixel-based game testing agents over exploration coverage and test execution quality when evaluated on a complex open-world environment resembling many aspects of real AAA games.
arXiv Detail & Related papers (2023-08-18T04:19:36Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Configurable Agent With Reward As Input: A Play-Style Continuum Generation [0.0]
We present a video game environment which lets us define multiple play-styles.
We then introduce CARI: a Reinforcement Learning agent able to simulate a wide range of play-styles.
arXiv Detail & Related papers (2022-11-29T13:59:25Z)
- Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL)
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Predicting Game Difficulty and Churn Without Players [0.0]
We propose a novel simulation model that is able to predict the per-level churn and pass rates of Angry Birds Dream Blast.
Our work demonstrates that player behavior predictions produced by DRL gameplay can be significantly improved by even a very simple population-level simulation of individual player differences.
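A population-level simulation of individual player differences, of the kind this paper credits for improved churn predictions, can be sketched in a few lines. Everything here is a hedged illustration, not the paper's model: the skill distribution, the logistic pass probability, and the fixed attempt budget are all assumptions chosen to show the mechanism (harder levels churn more of the remaining population).

```python
import math
import random

def simulate_churn(level_difficulties, n_players=1000, max_attempts=5, seed=0):
    """Monte Carlo over a player population with individual skill offsets.
    Returns the per-level churn rate: the fraction of still-active players
    who fail every attempt at that level and quit."""
    rng = random.Random(seed)
    active = [rng.gauss(0.0, 1.0) for _ in range(n_players)]  # per-player skill
    churn_rates = []
    for difficulty in level_difficulties:
        survivors = []
        for skill in active:
            # Logistic pass probability: higher skill vs. difficulty -> more passes.
            p_pass = 1.0 / (1.0 + math.exp(-(skill - difficulty)))
            if any(rng.random() < p_pass for _ in range(max_attempts)):
                survivors.append(skill)
        churn_rates.append(1.0 - len(survivors) / max(len(active), 1))
        active = survivors
    return churn_rates

rates = simulate_churn([-1.0, 0.0, 2.0])  # easy, medium, hard levels
```

Even this crude population model reproduces the qualitative effect the paper relies on: per-level churn depends on who is left in the population, not just on a single average player.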
arXiv Detail & Related papers (2020-08-29T08:37:47Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each others' playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
arXiv Detail & Related papers (2019-03-01T15:40:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.