Generating Game Levels of Diverse Behaviour Engagement
- URL: http://arxiv.org/abs/2207.02100v1
- Date: Tue, 5 Jul 2022 15:08:12 GMT
- Title: Generating Game Levels of Diverse Behaviour Engagement
- Authors: Keyuan Zhang, Jiayu Bai, Jialin Liu
- Abstract summary: Experimental studies on Super Mario Bros. indicate that using the same evaluation metrics but agents with different personas can generate levels for a particular persona.
It implies that, for simple games, using a game-playing agent of a specific player archetype as a level tester is probably all we need to generate levels of diverse behaviour engagement.
- Score: 2.5739833468005595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been growing interest in experience-driven
procedural level generation. Various metrics have been formulated to model
player experience and help generate personalised levels. In this work, we
question whether experience metrics can adapt to agents with different
personas. We start by reviewing existing metrics for evaluating game levels.
Then, focusing on platformer games, we design a framework integrating various
agents and evaluation metrics. Experimental studies on \emph{Super Mario Bros.}
indicate that using the same evaluation metrics but agents with different
personas can generate levels for a particular persona. It implies that, for
simple games, using a game-playing agent of a specific player archetype as a
level tester is probably all we need to generate levels of diverse behaviour
engagement.
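The central claim is that a single experience metric, computed from the play trace of a persona-specific game-playing agent, is enough to steer generation towards levels that engage that persona. Below is a minimal sketch of that evaluation loop under stated assumptions: the persona list, the `simulate` stub, and the weighted engagement score are illustrative placeholders, not the authors' implementation.

```python
# Sketch (not the authors' code): score candidate levels with one shared
# engagement metric, varying only the persona of the agent used as level tester.
import random

PERSONAS = ["runner", "collector", "killer"]  # hypothetical player archetypes


def simulate(level, persona):
    """Placeholder for running a persona agent on a level and recording its play trace.

    A real pipeline would call a rule-based or search-based Mario agent here.
    """
    random.seed(hash((level, persona)) & 0xFFFFFFFF)
    return {
        "jumps": random.randint(0, 30),
        "coins": random.randint(0, 20),
        "kills": random.randint(0, 15),
        "completed": random.random() > 0.2,
    }


def engagement(trace, persona):
    """One metric family for all personas; only the event weights differ."""
    weights = {
        "runner": {"jumps": 1.0},
        "collector": {"coins": 1.0},
        "killer": {"kills": 1.0},
    }[persona]
    if not trace["completed"]:
        return 0.0  # levels the persona agent cannot finish score zero
    return sum(weights.get(event, 0.0) * value
               for event, value in trace.items() if event != "completed")


def best_level_for(persona, candidates):
    """Use the persona agent as the level tester: keep the candidate it engages with most."""
    return max(candidates, key=lambda level: engagement(simulate(level, persona), persona))


if __name__ == "__main__":
    candidates = [f"level_{i}" for i in range(50)]  # stand-ins for generated levels
    for persona in PERSONAS:
        print(persona, "->", best_level_for(persona, candidates))
```

In a generate-and-test or evolutionary loop, `best_level_for` would become the fitness evaluation over newly generated candidates; the persona-specific agent is the only component that changes between target player types.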
Related papers
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can be used to serve as agents to simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z)
- GameEval: Evaluating LLMs on Conversational Games [93.40433639746331]
We propose GameEval, a novel approach to evaluating large language models (LLMs).
GameEval treats LLMs as game players and assigns them distinct roles with specific goals achieved by launching conversations of various forms.
We show that GameEval can effectively differentiate the capabilities of various LLMs, providing a comprehensive assessment of their integrated abilities to solve complex problems.
arXiv Detail & Related papers (2023-08-19T14:33:40Z)
- Estimating player completion rate in mobile puzzle games using reinforcement learning [0.0]
We train an RL agent and measure the number of moves required to complete a level.
This is then compared to the level completion rate of a large sample of real players.
We find that the strongest predictor of player completion rate for a level is the number of moves the agent needs to complete the level in its best 5% of runs on that level (see the sketch after this list).
arXiv Detail & Related papers (2023-06-26T12:00:05Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, which reads the game's original academic paper and uses the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Towards Objective Metrics for Procedurally Generated Video Game Levels [2.320417845168326]
We introduce two simulation-based evaluation metrics to measure the diversity and difficulty of generated levels.
We demonstrate that our diversity metric is more robust to changes in level size and representation than current methods.
The difficulty metric shows promise, as it correlates with existing estimates of difficulty in one of the tested domains, but it does face some challenges in the other domain.
arXiv Detail & Related papers (2022-01-25T14:13:50Z)
- Adapting Procedural Content Generation to Player Personas Through Evolution [0.0]
We propose an architecture using persona agents and experience metrics, which enables evolving procedurally generated levels tailored for particular player personas.
Using our game, "Grave Rave", we demonstrate that this approach successfully adapts to four rule-based persona agents over three different experience metrics.
arXiv Detail & Related papers (2021-12-07T16:26:33Z)
- Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network [11.055580854275474]
We show how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics.
An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance.
arXiv Detail & Related papers (2020-07-11T03:38:06Z)
- Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for itself to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
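For the completion-rate predictor summarised above (Estimating player completion rate in mobile puzzle games), the statistic reduces to a simple order statistic over the agent's runs. A hedged sketch follows, assuming each run is recorded simply as a move count and that "best runs" means the fewest-move completions; the function name and the aggregation by mean are assumptions, not the authors' code.

```python
import math
import random


def best_runs_move_count(move_counts, fraction=0.05):
    """Move count of the agent's best runs on a level.

    `move_counts` holds one entry per completed agent run on the level.
    Returns the mean move count over the best (fewest-move) `fraction` of runs,
    i.e. the statistic reported as the strongest predictor of player completion rate.
    """
    if not move_counts:
        raise ValueError("need at least one completed run")
    ordered = sorted(move_counts)                   # fewest moves first = best runs
    k = max(1, math.ceil(fraction * len(ordered)))  # size of the top-5% slice
    return sum(ordered[:k]) / k


if __name__ == "__main__":
    random.seed(0)
    runs = [random.randint(20, 80) for _ in range(200)]  # simulated RL-agent runs on one level
    print("best-5%-runs move count:", best_runs_move_count(runs))
```

The resulting per-level statistic would then be compared (e.g. via regression) against the completion rate measured from a large sample of real players, as the paper describes.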