Across-Game Engagement Modelling via Few-Shot Learning
- URL: http://arxiv.org/abs/2409.13002v1
- Date: Thu, 19 Sep 2024 16:21:21 GMT
- Title: Across-Game Engagement Modelling via Few-Shot Learning
- Authors: Kosmas Pinitas, Konstantinos Makantasis, Georgios N. Yannakakis
- Abstract summary: Domain generalisation involves learning AI models that can maintain high performance across diverse domains.
Video games present unique challenges and opportunities for the analysis of user experience.
We introduce a framework that decomposes the general domain-agnostic modelling of user experience into several domain-specific and game-dependent tasks.
- Score: 1.7969777786551424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalisation involves learning artificial intelligence (AI) models that can maintain high performance across diverse domains within a specific task. In video games, for instance, such AI models can supposedly learn to detect player actions across different games. Despite recent advancements in AI, domain generalisation for modelling the users' experience remains largely unexplored. While video games present unique challenges and opportunities for the analysis of user experience -- due to their dynamic and rich contextual nature -- modelling such experiences is limited by generally small datasets. As a result, conventional modelling methods often struggle to bridge the domain gap between users and games due to their reliance on large labelled training data and assumptions of common distributions of user experience. In this paper, we tackle this challenge by introducing a framework that decomposes the general domain-agnostic modelling of user experience into several domain-specific and game-dependent tasks that can be solved via few-shot learning. We test our framework on a variation of the publicly available GameVibe corpus, designed specifically to test a model's ability to predict user engagement across different first-person shooter games. Our findings demonstrate the superior performance of few-shot learners over traditional modelling methods and thus showcase the potential of few-shot learning for robust experience modelling in video games and beyond.
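To make the few-shot setup concrete, the sketch below treats each game as its own task: a handful of labelled gameplay clips from an unseen game form a support set, per-class prototypes are computed from their embeddings, and new clips are labelled by nearest prototype. This is a minimal illustration of few-shot classification in general, not the paper's actual architecture; the embedding dimension, the low/high engagement coding, and the prototype rule are all assumptions introduced here.

```python
# Hypothetical sketch of few-shot engagement prediction: each game is a
# separate task, adapted from a small labelled support set of that game's
# clips. Names, shapes, and the prototype classifier are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def prototypes(support_feats: np.ndarray, support_labels: np.ndarray) -> dict:
    """Mean embedding per engagement class (e.g., 0 = low, 1 = high)."""
    return {c: support_feats[support_labels == c].mean(axis=0)
            for c in np.unique(support_labels)}

def predict(query_feats: np.ndarray, protos: dict) -> np.ndarray:
    """Assign each query clip to the class of its nearest prototype."""
    classes = sorted(protos)
    dists = np.stack([np.linalg.norm(query_feats - protos[c], axis=1)
                      for c in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]

# Toy episode for one unseen game: 4 labelled support clips, 3 query clips.
rng = np.random.default_rng(0)
support = rng.normal(size=(4, 16))   # stand-ins for gameplay-clip embeddings
labels = np.array([0, 0, 1, 1])      # low / high engagement annotations
queries = support[[0, 2, 3]] + 0.1 * rng.normal(size=(3, 16))
print(predict(queries, prototypes(support, labels)))  # class per query clip
```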
Related papers
- AI-Spectra: A Visual Dashboard for Model Multiplicity to Enhance Informed and Transparent Decision-Making [1.860042727037436]
We present an approach, AI-Spectra, to leverage model multiplicity for interactive systems.
Model multiplicity means using slightly different AI models that yield equally valid outcomes or predictions for the same task.
We use a custom adaptation of Chernoff faces for AI-Spectra: Chernoff Bots.
arXiv Detail & Related papers (2024-11-14T18:50:41Z)
- Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z)
- Difficulty Modelling in Mobile Puzzle Games: An Empirical Study on Different Methods to Combine Player Analytics and Simulated Data [0.0]
A common practice consists of creating metrics out of data collected by player interactions with the content.
This allows for estimation only after the content is released and does not consider the characteristics of potential future players.
In this article, we present a number of potential solutions for the estimation of difficulty under such conditions.
arXiv Detail & Related papers (2024-01-30T20:51:42Z)
- Pre-trained Recommender Systems: A Causal Debiasing Perspective [19.712997823535066]
We develop a generic recommender that captures universal interaction patterns by training on generic user-item interaction data extracted from different domains.
Our empirical studies show that the proposed model can significantly improve recommendation performance in zero- and few-shot learning settings.
arXiv Detail & Related papers (2023-10-30T03:37:32Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate whether it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning [54.67880602409801]
In this paper, we study the problem of pre-training world models with abundant in-the-wild videos for efficient learning of visual control tasks.
We introduce Contextualized World Models (ContextWM) that explicitly separate context and dynamics modeling (a toy sketch of this separation appears after this list).
Our experiments show that in-the-wild video pre-training equipped with ContextWM can significantly improve the sample efficiency of model-based reinforcement learning.
arXiv Detail & Related papers (2023-05-29T14:29:12Z)
- Multi-Modal Experience Inspired AI Creation [33.34566822058209]
We study how to generate texts based on sequential multi-modal information.
We first design a multi-channel sequence-to-sequence architecture equipped with a multi-modal attention network.
We then propose a curriculum negative sampling strategy tailored for the sequential inputs.
arXiv Detail & Related papers (2022-09-02T11:50:41Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Invariant Feature Learning for Sensor-based Human Activity Recognition [11.334750079923428]
We present an invariant feature learning framework (IFLF) that extracts common information shared across subjects and devices.
Experiments demonstrated that IFLF is effective in handling both subject and device diversity across popular open datasets and an in-house dataset.
arXiv Detail & Related papers (2020-12-14T21:56:17Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimisation are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
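Referring back to the ContextWM entry above: the separation of a time-invariant context from time-varying dynamics can be pictured with a toy world model like the one below. Everything here (the shapes, the linear maps, the tanh activations) is an illustrative assumption rather than ContextWM's actual implementation.

```python
# Toy sketch of separating a fixed context encoding (visual style, layout)
# from a step-by-step dynamics model; all names and shapes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
CTX, STATE, ACT = 8, 4, 2

W_ctx = rng.normal(size=(CTX, 32))             # context encoder weights
W_dyn = rng.normal(size=(STATE, STATE + ACT))  # (state, action) -> next state
W_mix = rng.normal(size=(STATE, CTX))          # context conditions the step

def encode_context(reference_frame: np.ndarray) -> np.ndarray:
    """Time-invariant context computed once from a reference frame."""
    return np.tanh(W_ctx @ reference_frame)

def step(state: np.ndarray, action: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Next latent state: dynamics plus a context-conditioned correction."""
    return np.tanh(W_dyn @ np.concatenate([state, action]) + W_mix @ context)

frame = rng.normal(size=32)     # stand-in for an encoded video frame
ctx = encode_context(frame)     # computed once, reused at every step
s = np.zeros(STATE)
for t in range(3):              # roll the world model forward
    s = step(s, rng.normal(size=ACT), ctx)
print(s.shape)                  # (4,)
```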
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.