Label-Free Subjective Player Experience Modelling via Let's Play Videos
- URL: http://arxiv.org/abs/2410.02967v1
- Date: Thu, 3 Oct 2024 20:12:56 GMT
- Title: Label-Free Subjective Player Experience Modelling via Let's Play Videos
- Authors: Dave Goel, Athar Mahmoudi-Nejad, Matthew Guzdial
- Abstract summary: Player Experience Modelling (PEM) is the study of AI techniques applied to modelling a player's experience within a video game.
We propose a novel PEM development approach, approximating player experience from gameplay video.
We evaluate this approach by predicting affect in the game Angry Birds via a human subject study.
- Score: 2.3941497253612085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Player Experience Modelling (PEM) is the study of AI techniques applied to modelling a player's experience within a video game. PEM development can be labour-intensive, requiring expert hand-authoring or specialized data collection. In this work, we propose a novel PEM development approach, approximating player experience from gameplay video. We evaluate this approach by predicting affect in the game Angry Birds via a human subject study. We validate that our PEM correlates strongly with self-reported and sensor-based measures of affect, demonstrating the potential of this approach.
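To make the evaluation idea concrete, the sketch below checks how strongly a model's per-clip affect predictions track self-reported affect using Pearson's r. Everything here is an illustrative assumption: the `predict_affect` stand-in, the synthetic clips and ratings, and the choice of Pearson correlation are not taken from the paper's actual pipeline.

```python
# Hypothetical sketch: correlating PEM predictions with self-reported affect.
# The data and the predict_affect() stand-in are assumptions for illustration;
# the paper's actual features, model, and affect scales may differ.
import numpy as np
from scipy.stats import pearsonr

def predict_affect(gameplay_clips: np.ndarray) -> np.ndarray:
    """Stand-in for a PEM trained on Let's Play footage.

    Here it simply scores each clip by mean frame intensity; a real model
    would map video-derived features to an affect estimate.
    """
    return gameplay_clips.mean(axis=(1, 2, 3))

rng = np.random.default_rng(0)
clips = rng.random((30, 16, 64, 64))   # 30 clips of 16 low-resolution frames
self_reported = rng.random(30)         # e.g., normalised arousal ratings

predicted = predict_affect(clips)
r, p_value = pearsonr(predicted, self_reported)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```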
Related papers
- Across-Game Engagement Modelling via Few-Shot Learning [1.7969777786551424]
Domain generalisation involves learning AI models that can maintain high performance across diverse domains.
Video games present unique challenges and opportunities for the analysis of user experience.
We introduce a framework that decomposes the general domain-agnostic modelling of user experience into several domain-specific and game-dependent tasks.
arXiv Detail & Related papers (2024-09-19T16:21:21Z)
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method takes a video demonstration and its accompanying 3D body pose and generates expert commentary.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
- GameEval: Evaluating LLMs on Conversational Games [93.40433639746331]
We propose GameEval, a novel approach to evaluating large language models (LLMs).
GameEval treats LLMs as game players and assigns them distinct roles with specific goals achieved by launching conversations of various forms.
We show that GameEval can effectively differentiate the capabilities of various LLMs, providing a comprehensive assessment of their integrated abilities to solve complex problems.
arXiv Detail & Related papers (2023-08-19T14:33:40Z)
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback [97.54519989641388]
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
Only a subset of the language models we consider can self-play and improve the deal price from AI feedback.
arXiv Detail & Related papers (2023-05-17T11:55:32Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Leveraging Cluster Analysis to Understand Educational Game Player Experiences and Support Design [3.07869141026886]
The ability for an educational game designer to understand their audience's play styles is an essential tool for improving their game's design.
We present a simple, reusable process using best practices for data clustering, feasible for use within a small educational game studio.
arXiv Detail & Related papers (2022-10-18T14:51:15Z)
- WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models [91.92346150646007]
In this work, we introduce WinoGAViL: an online game to collect vision-and-language associations.
We use the game to collect 3.5K instances, finding that they are intuitive for humans but challenging for state-of-the-art AI models.
Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills.
arXiv Detail & Related papers (2022-07-25T23:57:44Z)
- An Appraisal Transition System for Event-driven Emotions in Agent-based Player Experience Testing [9.26240699624761]
We propose an automated player experience (PX) testing approach built on a formal model of event-based emotions.
A working prototype of the model is integrated on top of Aplib, a tactical agent programming library, to create intelligent PX test agents.
arXiv Detail & Related papers (2021-05-12T11:09:35Z)
- Towards Action Model Learning for Player Modeling [1.9659095632676098]
Player modeling attempts to create a computational model which accurately approximates a player's behavior in a game.
Most player modeling techniques rely on domain knowledge and are not transferable across games.
We present our findings on using action model learning (AML) to learn a player model in a domain-agnostic manner.
arXiv Detail & Related papers (2021-03-09T19:32:30Z)
- Player Modeling via Multi-Armed Bandits [6.64975374754221]
We present a novel approach to player modeling based on multi-armed bandits (MABs); a minimal, generic bandit sketch appears after this list.
We present an approach to evaluating and fine-tuning these algorithms prior to generating data in a user study.
arXiv Detail & Related papers (2021-02-10T05:04:45Z)
- An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games, and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z)
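As flagged in the Player Modeling via Multi-Armed Bandits entry above, the following is a minimal, generic epsilon-greedy bandit loop illustrating the framing of content variants as arms and an engagement signal as reward. The arms, the simulated rewards, and the epsilon value are assumptions for illustration; this is not that paper's algorithm or evaluation procedure.

```python
# Generic epsilon-greedy multi-armed bandit, sketched as a stand-in for
# bandit-based player modelling. Arms, rewards, and epsilon are assumptions.
import random

def run_bandit(true_engagement, steps=1000, epsilon=0.1, seed=0):
    """Pick among content variants (arms) to maximise observed engagement."""
    rng = random.Random(seed)
    n_arms = len(true_engagement)
    counts = [0] * n_arms          # how often each variant was shown
    values = [0.0] * n_arms        # running mean of observed engagement

    for _ in range(steps):
        if rng.random() < epsilon:           # explore a random variant
            arm = rng.randrange(n_arms)
        else:                                # exploit the best estimate so far
            arm = max(range(n_arms), key=lambda a: values[a])
        # Simulated noisy engagement signal for the chosen variant.
        reward = true_engagement[arm] + rng.gauss(0, 0.05)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = run_bandit(true_engagement=[0.3, 0.5, 0.7])
print("plays per variant:", counts)
print("estimated engagement:", [round(v, 2) for v in values])
```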