DL-DDA -- Deep Learning based Dynamic Difficulty Adjustment with UX and Gameplay constraints
- URL: http://arxiv.org/abs/2106.03075v1
- Date: Sun, 6 Jun 2021 09:47:15 GMT
- Title: DL-DDA -- Deep Learning based Dynamic Difficulty Adjustment with UX and Gameplay constraints
- Authors: Dvir Ben Or, Michael Kolomenkin, Gil Shabat
- Abstract summary: We propose a method that automatically optimizes user experience while taking into consideration other players and macro constraints imposed by the game.
We provide empirical results of an internal experiment conducted on $200,000$ players, in which the method outperformed the corresponding manual heuristics crafted by game design experts.
- Score: 0.8594140167290096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic difficulty adjustment (DDA) is the process of automatically changing a game's difficulty to optimize user experience. It is a vital part of almost any modern game. Most existing DDA approaches concentrate on the
experience of a player without looking at the rest of the players. We propose a
method that automatically optimizes user experience while taking into
consideration other players and macro constraints imposed by the game. The
method is based on a deep neural network architecture that involves a count loss
constraint that has zero gradients in most of its support. We suggest a method
to optimize this loss function and provide a theoretical analysis of its performance. Finally, we provide empirical results of an internal experiment conducted on $200,000$ players, in which the method outperformed the corresponding manual heuristics crafted by game design experts.
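The count loss above is the technically tricky piece: a count of players is piecewise constant in the network's outputs, so its gradient is zero almost everywhere and plain backpropagation stalls. Below is a minimal sketch of one standard workaround, a temperature-scaled sigmoid relaxation of the hard count; this illustrates the difficulty, not the paper's actual optimization method, and the threshold and target values are hypothetical.

```python
import torch

def soft_count(scores: torch.Tensor, threshold: float,
               temperature: float = 0.1) -> torch.Tensor:
    # The hard count (scores > threshold).sum() has zero gradient almost
    # everywhere; a temperature-scaled sigmoid is a smooth surrogate that
    # tightens as temperature -> 0.
    return torch.sigmoid((scores - threshold) / temperature).sum()

def count_penalty(scores: torch.Tensor, target: float,
                  threshold: float = 0.5) -> torch.Tensor:
    # Quadratic penalty pulling the soft count toward a macro constraint,
    # e.g. "roughly 100 of these players should see the hard setting"
    # (hypothetical numbers).
    return (soft_count(scores, threshold) - target) ** 2

# Hypothetical per-player difficulty scores from some upstream model.
scores = torch.randn(1000, requires_grad=True)
loss = count_penalty(scores, target=100.0)
loss.backward()  # gradients flow, unlike with the hard count
print(loss.item(), scores.grad.abs().mean().item())
```

Shrinking the temperature tightens the approximation to the true count at the price of steeper, less stable gradients, the usual trade-off with such relaxations.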
Related papers
- Personalized Dynamic Difficulty Adjustment -- Imitation Learning Meets Reinforcement Learning [44.99833362998488]
In this work, we explore balancing game difficulty using machine learning-based agents to challenge players based on their current behavior.
This is achieved by a combination of two agents, in which one learns to imitate the player, while the second is trained to beat the first.
In our demo, we investigate the proposed framework for personalized dynamic difficulty adjustment of AI agents in the context of the fighting game AI competition.
arXiv Detail & Related papers (2024-08-13T11:24:12Z)
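A minimal sketch of the two-agent recipe from the entry above: one network is fit to logged player data by behavioral cloning, the other gets a REINFORCE-style update rewarded for beating the first. Random tensors stand in for replay data and simulated match outcomes, and all dimensions are hypothetical; the demo's actual models and training loop are not specified here.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 32, 8  # hypothetical state/action encoding

imitator = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))  # mimics the human player
opponent = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))  # trained to beat it

# 1) Behavioral cloning: fit the imitator to logged (state, action) pairs.
states = torch.randn(256, STATE_DIM)           # stand-in for logged states
actions = torch.randint(0, N_ACTIONS, (256,))  # stand-in for logged actions
bc_loss = nn.functional.cross_entropy(imitator(states), actions)
bc_loss.backward()

# 2) REINFORCE-style update for the opponent, rewarded for winning
#    simulated matches against the frozen imitator.
opp_states = torch.randn(256, STATE_DIM)
dist = torch.distributions.Categorical(logits=opponent(opp_states))
sampled = dist.sample()
rewards = torch.randn(256)                     # stand-in for match outcomes
rl_loss = -(dist.log_prob(sampled) * rewards).mean()
rl_loss.backward()
```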
- Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective [80.51463286812314]
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features.
AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay.
We show how LR decay breaks the balance of the minimax game by empowering the trainer with a stronger memorization ability.
arXiv Detail & Related papers (2023-10-30T09:00:11Z)
- Continuous Reinforcement Learning-based Dynamic Difficulty Adjustment in a Visual Working Memory Game [5.857929080874288]
Reinforcement Learning (RL) methods have been employed for Dynamic Difficulty Adjustment (DDA) in non-competitive games.
We propose a continuous RL-based DDA methodology for a visual working memory (VWM) game to handle the complex search space for the difficulty of memorization.
arXiv Detail & Related papers (2023-08-24T12:05:46Z)
- Personalized Game Difficulty Prediction Using Factorization Machines [0.9558392439655011]
We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that FMs are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
arXiv Detail & Related papers (2022-09-06T08:03:46Z)
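The prediction rule behind the factorization machine entry above is compact enough to sketch: a generic second-order FM evaluated with Rendle's $O(nk)$ identity for the pairwise interactions. The one-hot player/level encoding and all parameter values below are hypothetical, not the paper's feature set.

```python
import numpy as np

def fm_predict(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    # Second-order FM: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    # with the interaction term computed in O(n k) via
    # 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2].
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(linear + interactions)

# Hypothetical setup: 3 players x 4 levels, predicting attempt counts from
# one-hot player and level indicators.
rng = np.random.default_rng(0)
n_features, k = 3 + 4, 2
w0, w, V = 5.0, rng.normal(size=n_features), rng.normal(size=(n_features, k))
x = np.zeros(n_features)
x[1] = 1.0      # player 1
x[3 + 2] = 1.0  # level 2
print(fm_predict(x, w0, w, V))  # predicted number of attempts
```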
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
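The detection step of the collusion entry above uses Isolation Forest, which is available off the shelf in scikit-learn. A minimal sketch with synthetic stand-in features; in the paper the inputs pair social-relationship signals with in-game behavioral patterns.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-pair features standing in for social ties between opposing
# players plus behavioral statistics; a small cluster of injected outliers
# plays the role of colluding pairs.
rng = np.random.default_rng(42)
normal_pairs = rng.normal(0.0, 1.0, size=(500, 4))
colluders = rng.normal(4.0, 0.5, size=(10, 4))
X = np.vstack([normal_pairs, colluders])

# Isolation Forest isolates points with random splits; outliers need few
# splits to isolate and are labeled -1 by predict().
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)  # 1 = inlier, -1 = outlier
print("flagged pairs:", np.where(labels == -1)[0])
```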
- No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z)
- Dynamic Difficulty Adjustment in Virtual Reality Exergames through Experience-driven Procedural Content Generation [0.4899818550820576]
We propose to use experience-driven Procedural Content Generation for DDA in VR exercise games.
We implement an initial prototype in which the player must traverse a maze that includes several exercise rooms.
To match the player's capabilities, we use Deep Reinforcement Learning to adjust the structure of the maze.
arXiv Detail & Related papers (2021-08-19T16:06:16Z)
- Fast Game Content Adaptation Through Bayesian-based Player Modelling [6.510061176722249]
This paper explores a novel method to realize this goal in the context of dynamic difficulty adjustment (DDA).
The aim is to constantly adapt the content of a game to the skill level of the player, keeping them engaged by avoiding states that are either too difficult or too easy.
Current systems for DDA rely on expensive data mining or on hand-crafted rules designed for particular domains, and usually adapt to keep players in a state of flow.
arXiv Detail & Related papers (2021-05-18T12:56:44Z)
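A toy illustration of the Bayesian player-modelling idea above: maintain a conjugate Beta-Bernoulli posterior over the player's probability of clearing a challenge and nudge difficulty toward a target clear rate. The paper's model and adaptation machinery are richer than this; the target value here is hypothetical.

```python
TARGET_CLEAR_RATE = 0.7  # hypothetical "flow" target

class SkillModel:
    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0  # uniform Beta(1, 1) prior

    def update(self, cleared: bool) -> None:
        # Conjugate update after observing one attempt.
        if cleared:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def clear_rate(self) -> float:
        # Posterior mean of the clear probability.
        return self.alpha / (self.alpha + self.beta)

model = SkillModel()
for outcome in [True, True, False, True]:  # stand-in play history
    model.update(outcome)

# Raise difficulty when the player clears more often than the target.
adjust = "harder" if model.clear_rate() > TARGET_CLEAR_RATE else "easier"
print(round(model.clear_rate(), 3), adjust)
```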
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
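The quality-diversity component of the entry above, MAP-Elites, is easy to sketch: keep the best solution per cell of a discretized behavior space and mutate randomly chosen elites. The genome, toy objective, and 1-D play-style descriptor below are hypothetical stand-ins for the paper's MCTS parameterizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate(genome: np.ndarray) -> tuple[float, int]:
    fitness = -float(np.sum(genome ** 2))         # toy objective
    cell = int(np.clip(genome[0] * 2 + 5, 0, 9))  # 10 behavior cells
    return fitness, cell

# archive maps a behavior cell to its elite (genome, fitness).
archive: dict[int, tuple[np.ndarray, float]] = {}
for _ in range(2000):
    if archive:  # mutate a random elite...
        parent, _ = archive[int(rng.choice(list(archive)))]
        child = parent + rng.normal(0.0, 0.2, size=2)
    else:        # ...or bootstrap from scratch
        child = rng.normal(0.0, 1.0, size=2)
    fitness, cell = evaluate(child)
    if cell not in archive or fitness > archive[cell][1]:
        archive[cell] = (child, fitness)  # new elite for this cell

print({cell: round(fit, 3) for cell, (_, fit) in sorted(archive.items())})
```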
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
- Provable Self-Play Algorithms for Competitive Reinforcement Learning [48.12602400021397]
We study self-play in competitive reinforcement learning under the setting of Markov games.
We show that a self-play algorithm achieves regret $\tilde{\mathcal{O}}(\sqrt{T})$ after playing $T$ steps of the game.
We also introduce an explore-then-exploit style algorithm, which achieves a slightly worse regret $\tilde{\mathcal{O}}(T^{2/3})$, but is guaranteed to run in polynomial time even in the worst case.
arXiv Detail & Related papers (2020-02-10T18:44:50Z)