Continuous Reinforcement Learning-based Dynamic Difficulty Adjustment in a Visual Working Memory Game
- URL: http://arxiv.org/abs/2308.12726v1
- Date: Thu, 24 Aug 2023 12:05:46 GMT
- Title: Continuous Reinforcement Learning-based Dynamic Difficulty Adjustment in a Visual Working Memory Game
- Authors: Masoud Rahimi, Hadi Moradi, Abdol-hossein Vahabie, Hamed Kebriaei
- Abstract summary: Reinforcement Learning (RL) methods have been employed for Dynamic Difficulty Adjustment (DDA) in non-competitive games.
We propose a continuous RL-based DDA methodology for a visual working memory (VWM) game to handle the complex search space for the difficulty of memorization.
- Score: 5.857929080874288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Difficulty Adjustment (DDA) is a viable approach to enhance a
player's experience in video games. Recently, Reinforcement Learning (RL)
methods have been employed for DDA in non-competitive games; nevertheless, they
rely solely on a discrete state-action space with a small search space. In this
paper, we propose a continuous RL-based DDA methodology for a visual working
memory (VWM) game to handle the complex search space for the difficulty of
memorization. The proposed RL-based DDA tailors game difficulty based on the
player's score and game difficulty in the last trial. We define a continuous
metric for the difficulty of memorization, then take the task difficulty and
the difficulty-score vector as the RL agent's action and state, respectively.
We evaluated the proposed method through a within-subject
experiment involving 52 subjects. The proposed approach was compared with two
rule-based difficulty adjustment methods in terms of player's score and game
experience measured by a questionnaire. The proposed RL-based approach resulted
in a significantly better game experience in terms of competence, tension, and
negative and positive affect. Players also achieved higher scores and win
rates. Furthermore, the proposed RL-based DDA led to a significantly smaller
decline in score over a 20-trial session.
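To make the setup concrete, the sketch below shows a minimal continuous RL loop for DDA under the stated state-action design: the state is the (difficulty, score) pair from the last trial and the action is the next difficulty. This is an illustrative sketch only, not the authors' implementation; the player model (`simulate_trial`), the reward shaping, and the linear-Gaussian policy are placeholder assumptions.

```python
# Illustrative continuous RL loop for DDA (not the authors' implementation).
# State: (difficulty, score) from the last trial; action: next difficulty in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(difficulty: float) -> float:
    """Hypothetical player model: success probability falls as difficulty
    exceeds a fixed latent skill level."""
    skill = 0.6
    p_correct = 1.0 / (1.0 + np.exp(10.0 * (difficulty - skill)))
    return float(rng.random() < p_correct)

# Linear-Gaussian policy: mean next difficulty = w . state + b, fixed std.
w, b, std, lr = np.zeros(2), 0.5, 0.1, 0.05

state = np.array([0.5, 0.0])  # (last difficulty, last score)
for trial in range(20):       # a 20-trial session, as in the experiment
    mean = float(w @ state + b)
    action = float(np.clip(rng.normal(mean, std), 0.0, 1.0))
    score = simulate_trial(action)
    # Placeholder reward: winning at higher difficulty earns more, losing
    # costs more (the paper's actual reward design is not in the abstract).
    reward = action if score > 0 else -action
    # One-step REINFORCE update on the Gaussian policy's log-likelihood.
    grad_mean = (action - mean) / std**2
    w += lr * reward * grad_mean * state
    b += lr * reward * grad_mean
    state = np.array([action, score])
```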
Related papers
- Personalized Dynamic Difficulty Adjustment -- Imitation Learning Meets Reinforcement Learning [44.99833362998488]
In this work, we explore balancing game difficulty using machine learning-based agents to challenge players based on their current behavior.
This is achieved by combining two agents: one learns to imitate the player, while the second is trained to beat the first.
In our demo, we investigate the proposed framework for personalized dynamic difficulty adjustment of AI agents in the context of the fighting game AI competition.
arXiv Detail & Related papers (2024-08-13T11:24:12Z)
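As an illustration of the two-agent idea in the entry above, the toy sketch below fits an imitator to logged player moves and has a challenger best-respond to it in a rock-paper-scissors matrix game. Everything here (the payoff matrix, `player_log`, the frequency-based imitator) is a hypothetical stand-in, not the demo's actual agents.

```python
# Toy sketch of the two-agent setup: an imitator fits the player's move
# distribution; a challenger best-responds to the imitator.
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (challenger).
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])

player_log = [0, 0, 1, 0, 2, 0, 0, 1]       # logged player moves (mostly rock)

# Agent 1: "learns to imitate the player" -> empirical move distribution.
imitator = np.bincount(player_log, minlength=3) / len(player_log)

# Agent 2: "trained to beat the first" -> best response to the imitator.
challenger_values = PAYOFF @ imitator        # expected payoff of each move
challenger = int(np.argmax(challenger_values))

print("imitator distribution:", imitator)    # ~[0.625, 0.25, 0.125]
print("challenger plays move:", challenger)  # paper (index 1) beats rock
```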
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
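A hedged sketch of the scripted-plus-learned combination described above: a tabular policy chooses a high-level strategy and hand-written rules execute it. The game state, strategy set, and one-step value update are placeholder assumptions, not the paper's tower defense environment.

```python
# Illustrative scripted-plus-learned controller (placeholder game, not the
# paper's tower defense environment).
import random
from collections import defaultdict

STRATEGIES = ["defend", "expand", "economy"]

def scripted_execute(strategy: str, state: dict) -> None:
    """Hand-written rules carry out the chosen high-level strategy."""
    if strategy == "defend":
        state["towers"] += 1
    elif strategy == "expand":
        state["territory"] += 1
    else:
        state["gold"] += 2

Q = defaultdict(float)          # value of (observation, strategy) pairs
epsilon, alpha = 0.1, 0.5

def choose_strategy(obs: tuple) -> str:
    """Epsilon-greedy choice over high-level strategies."""
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda s: Q[(obs, s)])

state = {"towers": 0, "territory": 0, "gold": 0}
for step in range(100):
    obs = (state["towers"] > 2, state["gold"] > 5)   # coarse observation
    strategy = choose_strategy(obs)
    scripted_execute(strategy, state)                # scripted layer acts
    reward = float(state["territory"])               # placeholder objective
    # Simplified bandit-style value update (full RL would also bootstrap
    # on the next state's value).
    Q[(obs, strategy)] += alpha * (reward - Q[(obs, strategy)])
```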
- Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective [80.51463286812314]
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features.
AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay.
We show how LR decay breaks the balance of the minimax game by empowering the trainer with a stronger memorization ability.
arXiv Detail & Related papers (2023-10-30T09:00:11Z)
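For reference, the minimax game in question is the standard adversarial training objective, written here in common notation rather than the paper's own: the trainer picks parameters $\theta$ to minimize the worst-case loss an attacker can induce within an $\epsilon$-ball around each input.

$$ \min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[\, \max_{\|\delta\|_{\infty} \le \epsilon}\; \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \,\Big] $$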
- Personalized Game Difficulty Prediction Using Factorization Machines [0.9558392439655011]
We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that factorization machines (FMs) are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
arXiv Detail & Related papers (2022-09-06T08:03:46Z)
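The second-order factorization machine behind the approach above has a standard closed form; the sketch below implements the FM prediction equation with random placeholder weights (the feature encoding and dimensions are assumptions, not the paper's trained model).

```python
# Standard second-order factorization machine prediction (Rendle, 2010);
# weights here are random placeholders, not a trained difficulty model.
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 6, 3                      # e.g. one-hot player id + level id
w0 = 0.0
w = rng.normal(size=n_features)           # linear weights
V = rng.normal(size=(n_features, k))      # latent factor vectors

def fm_predict(x: np.ndarray) -> float:
    """y = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j."""
    linear = w @ x
    # O(nk) identity for the pairwise interaction term.
    pairwise = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(w0 + linear + pairwise)

x = np.array([1, 0, 0, 0, 1, 0], dtype=float)  # player 0 attempting level 1
print("predicted attempts:", fm_predict(x))
```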
- A Ranking Game for Imitation Learning [22.028680861819215]
We treat imitation as a two-player ranking-based Stackelberg game between a policy and a reward function.
This game encompasses a large subset of both inverse reinforcement learning (IRL) methods and methods which learn from offline preferences.
We theoretically analyze the requirements of the loss function used for ranking policy performances to facilitate near-optimal imitation learning at equilibrium.
arXiv Detail & Related papers (2022-02-07T19:38:22Z)
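Schematically, such a ranking-based Stackelberg game can be written as a bilevel problem in which the reward player minimizes a ranking loss over demonstrated or preference-ordered behaviors while the policy player best-responds by maximizing the learned reward. The formulation below is a generic sketch in assumed notation ($L_{\text{rank}}$, $\mathcal{D}$), not the paper's exact objective:

$$ \min_{R}\; L_{\text{rank}}(R;\, \mathcal{D}) \quad \text{s.t.} \quad \pi^{*} \in \arg\max_{\pi}\; \mathbb{E}_{\pi}\big[\, R(s, a) \,\big] $$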
- DL-DDA -- Deep Learning based Dynamic Difficulty Adjustment with UX and Gameplay constraints [0.8594140167290096]
We propose a method that automatically optimizes user experience while taking into consideration other players and macro constraints imposed by the game.
We provide empirical results of an internal experiment conducted on $200,000$ players, which was found to outperform the corresponding difficulty settings manually crafted by game design experts.
arXiv Detail & Related papers (2021-06-06T09:47:15Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
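The quality-diversity component above relies on MAP-Elites, which keeps one elite solution per cell of a discretized behavior space. The toy loop below shows that core mechanic; the fitness function, behavior descriptors, and mutation scheme are placeholder assumptions unrelated to Tribes or the MCTS agent.

```python
# Toy MAP-Elites loop: one elite per behavior-space cell (illustrative only;
# fitness() and behavior() stand in for play-style evaluation in a real game).
import numpy as np

rng = np.random.default_rng(0)

def fitness(x: np.ndarray) -> float:
    return -float(np.sum(x**2))            # placeholder quality measure

def behavior(x: np.ndarray) -> tuple:
    # Discretize two descriptor dimensions (e.g. aggression, expansion) into bins.
    bins = np.clip(((x[:2] + 1) * 5).astype(int), 0, 9)
    return tuple(bins)

archive: dict[tuple, tuple[float, np.ndarray]] = {}
for _ in range(5000):
    if archive and rng.random() < 0.9:
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        x = np.clip(parent + rng.normal(0, 0.1, size=4), -1, 1)   # mutate an elite
    else:
        x = rng.uniform(-1, 1, size=4)                            # random seed
    cell, f = behavior(x), fitness(x)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, x)             # keep only the best per cell

print(f"{len(archive)} behavior cells filled with elites")
```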
- An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games, and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
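Exploitability, the quantity the entry above aims to keep low, can be computed directly in a zero-sum matrix game as the total gain available to both players from best-responding to the current profile. The example below uses a toy payoff matrix and arbitrary strategies, not anything from the paper.

```python
# Exploitability of a strategy profile in a zero-sum matrix game (toy example;
# the payoff matrix and strategies are placeholders, not from the paper).
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])              # row player's payoffs (RPS)

x = np.array([0.5, 0.3, 0.2])              # row strategy (suboptimal)
y = np.array([1/3, 1/3, 1/3])              # column strategy (uniform = optimal)

value = x @ A @ y                          # expected payoff to the row player
row_br = np.max(A @ y)                     # row's best-response value vs y
col_br = np.min(x @ A)                     # column minimizes row's payoff vs x
exploitability = (row_br - value) + (value - col_br)

print(f"value under profile: {value:.3f}, exploitability: {exploitability:.3f}")
```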