Improving Deep Localized Level Analysis: How Game Logs Can Help
- URL: http://arxiv.org/abs/2212.03376v1
- Date: Wed, 7 Dec 2022 00:05:16 GMT
- Title: Improving Deep Localized Level Analysis: How Game Logs Can Help
- Authors: Natalie Bombardieri, Matthew Guzdial
- Abstract summary: We present novel improvements to affect prediction by using a deep convolutional neural network (CNN) to predict player experience.
We test our approach on levels based on Super Mario Bros. (Infinite Mario Bros.) and Super Mario Bros.: The Lost Levels (Gwario).
- Score: 0.9645196221785693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Player modelling is the field of study associated with understanding players.
One pursuit in this field is affect prediction: the ability to predict how a
game will make a player feel. We present novel improvements to affect
prediction by using a deep convolutional neural network (CNN) to predict player
experience trained on game event logs in tandem with localized level structure
information. We test our approach on levels based on Super Mario Bros.
(Infinite Mario Bros.) and Super Mario Bros.: The Lost Levels (Gwario), as well
as original Super Mario Bros. levels. We outperform prior work, and demonstrate
the utility of training on player logs, even when lacking them at test time for
cross-domain player modelling.
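As a hedged illustration of the idea (not the authors' actual architecture; every layer shape, filter count, and log-feature name below is invented for the sketch), a model that fuses localized level structure with game-event-log features before an affect head could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(patch, kernels):
    """Valid 2-D convolution of a tile grid with each kernel, then ReLU."""
    kh, kw = kernels.shape[1:]
    H, W = patch.shape
    out = np.zeros((len(kernels), H - kh + 1, W - kw + 1))
    for f in range(len(kernels)):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = max(0.0, float(np.sum(patch[i:i+kh, j:j+kw] * kernels[f])))
    return out

# Illustrative parameters: random stand-ins for learned weights.
kernels = rng.normal(size=(4, 3, 3))   # 4 structure filters over level tiles
w_head = rng.normal(size=(4 + 3,))     # linear head over pooled + log features

def predict_affect(level_patch, log_vector):
    pooled = conv_relu(level_patch, kernels).mean(axis=(1, 2))  # global avg pool
    combined = np.concatenate([pooled, log_vector])             # fuse modalities
    return float(combined @ w_head)                             # affect score

patch = rng.integers(0, 2, size=(10, 10)).astype(float)  # tile-occupancy grid
logs = np.array([3.0, 1.0, 0.0])  # e.g. jump/death/coin counts in this segment
score = predict_affect(patch, logs)
```

The point of the sketch is the fusion step: structure features pooled from a local level window are concatenated with per-segment event counts before the prediction head, which is what lets log information inform the learned structure features.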
Related papers
- CNN-based Game State Detection for a Foosball Table [1.612440288407791]
In the game of Foosball, a compact and comprehensive game state description consists of the positional shifts and rotations of the figures and the position of the ball over time.
In this paper, a figure detection system to determine the game state in Foosball is presented.
A dataset is utilized to train Convolutional Neural Network (CNN) based end-to-end regression models to predict the rotations and shifts of each rod.
arXiv Detail & Related papers (2024-04-08T09:48:02Z)
- Optimizing Mario Adventures in a Constrained Environment [0.0]
We learn playing Super Mario Bros. using Genetic Algorithm (MarioGA) and NeuroEvolution (MarioNE) techniques.
We formalise the SMB agent to maximise both the total value of collected coins and the total distance travelled (reward).
We provide a fivefold comparative analysis by plotting fitness plots, ability to finish different levels of world 1, and domain adaptation (transfer learning) of the trained models.
arXiv Detail & Related papers (2023-12-14T08:45:26Z)
- Estimating player completion rate in mobile puzzle games using reinforcement learning [0.0]
We train an RL agent and measure the number of moves required to complete a level.
This is then compared to the level completion rate of a large sample of real players.
We find that the strongest predictor of a level's player completion rate is the number of moves taken in the best 5% of the agent's runs on that level.
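That predictor reduces to a simple aggregation over the agent's runs; a minimal sketch with made-up move counts (the paper's exact aggregation may differ):

```python
import numpy as np

def best_runs_move_count(move_counts, top_fraction=0.05):
    """Mean move count over the agent's best (fewest-move) runs on a level."""
    counts = np.sort(np.asarray(move_counts))
    k = max(1, int(len(counts) * top_fraction))  # keep at least one run
    return counts[:k].mean()

# 100 simulated agent runs on one level (illustrative move counts).
runs = [30, 28, 45, 52, 27, 33, 60, 29, 31, 40] * 10
print(best_runs_move_count(runs))  # 27.0 -- mean of the 5 fewest-move runs
```

Focusing on the best few percent of runs filters out the agent's exploratory failures, which is presumably why it correlates better with human completion rates than the mean over all runs.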
arXiv Detail & Related papers (2023-06-26T12:00:05Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Personalized Game Difficulty Prediction Using Factorization Machines [0.9558392439655011]
We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that FMs are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
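A second-order factorization machine of the kind described can be sketched as follows; the one-hot player/level encoding and latent dimension are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """2nd-order FM: w0 + w.x + sum_{i<j} <V_i, V_j> x_i x_j, in O(kn) form."""
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return w0 + w @ x + pairwise

rng = np.random.default_rng(1)
n_players, n_levels, k = 5, 4, 3
# One-hot player id + one-hot level id, as in content recommendation.
x = np.zeros(n_players + n_levels)
x[2] = 1.0              # player 2
x[n_players + 1] = 1.0  # level 1
w0, w = 0.5, rng.normal(size=len(x))
V = rng.normal(size=(len(x), k))    # latent factor vector per feature

attempts = fm_predict(x, w0, w, V)  # predicted attempt count (illustrative)
```

The pairwise term is what lets the model generalize: a new player/level pair gets a prediction from the inner product of their learned latent factors, even if that exact pair was never observed.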
arXiv Detail & Related papers (2022-09-06T08:03:46Z)
- An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games like Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing styles.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
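The blending step above reduces to interpolating in the shared latent space; a minimal sketch with placeholder latent codes (a trained VAE decoder, not shown, would map each blended code back to a level segment):

```python
import numpy as np

def blend_latents(z_a, z_b, steps=5):
    """Linearly interpolate between two latent codes.

    Each interpolated point is a candidate blended level: decoding it with
    the trained VAE would yield a segment mixing properties of both games.
    """
    return [(1 - a) * z_a + a * z_b for a in np.linspace(0.0, 1.0, steps)]

z_mario = np.zeros(32)   # placeholder code for an encoded SMB segment
z_icarus = np.ones(32)   # placeholder code for an encoded Kid Icarus segment
blends = blend_latents(z_mario, z_icarus)
print(len(blends), blends[2][0])  # 5 0.5 -- midpoint is an even 50/50 blend
```

Because the interpolation weight is a single continuous knob, a co-creative tool can expose it directly to a designer, which is the controllability the paper argues for.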
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.