RaidEnv: Exploring New Challenges in Automated Content Balancing for
Boss Raid Games
- URL: http://arxiv.org/abs/2307.01676v1
- Date: Tue, 4 Jul 2023 12:07:25 GMT
- Title: RaidEnv: Exploring New Challenges in Automated Content Balancing for
Boss Raid Games
- Authors: Hyeon-Chang Jeon, In-Chang Baek, Cheong-mok Bae, Taehwa Park, Wonsang
You, Taegwan Ha, Hoyun Jung, Jinha Noh, Seungwon Oh, Kyung-Joong Kim
- Abstract summary: RaidEnv is a new game simulator that includes diverse and customizable content for the boss raid scenario in MMORPG games.
We introduce two evaluation metrics to provide guidance for AI in automatic content balancing.
- Score: 1.9851345691234763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The balance of game content significantly impacts the gaming experience.
Unbalanced game content diminishes engagement or increases frustration because
of repetitive failure. Although game designers intend to adjust the difficulty
of game content, this is a repetitive, labor-intensive, and challenging
process, especially for commercial-level games with extensive content. To
address this issue, the game research community has explored automated game
balancing using artificial intelligence (AI) techniques. However, previous
studies have focused on limited game content and did not consider the
importance of the generalization ability of playtesting agents when
encountering content changes. In this study, we propose RaidEnv, a new game
simulator that includes diverse and customizable content for the boss raid
scenario in MMORPG games. Additionally, we design two benchmarks for the boss
raid scenario that can aid in the practical application of game AI. These
benchmarks address two open problems in automatic content balancing, and we
introduce two evaluation metrics to provide guidance for AI in automatic
content balancing. This novel game research platform expands the frontiers of
automatic game balancing problems and offers a framework within a realistic
game production pipeline.
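The automatic content-balancing problem the abstract describes can be illustrated with a minimal, hypothetical sketch: a tuner adjusts a single piece of content (here, boss HP) until a playtesting agent's simulated win rate reaches a designer-specified target. The function names and the toy win-rate model below are illustrative assumptions, not part of RaidEnv's actual API.

```python
import random

def simulate_win_rate(boss_hp, trials=2000):
    """Toy playtest: the agent deals 90-110 damage per turn over 10 turns.

    A purely illustrative stand-in for running a playtesting agent in a
    simulator such as RaidEnv; not the real environment.
    """
    rng = random.Random(0)  # fixed seed: deterministic, monotone in boss_hp
    wins = 0
    for _ in range(trials):
        damage = sum(rng.randint(90, 110) for _ in range(10))
        if damage >= boss_hp:
            wins += 1
    return wins / trials

def balance_boss_hp(target_win_rate=0.5, lo=500.0, hi=1500.0, iters=20):
    """Bisect boss HP until the simulated win rate matches the target."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if simulate_win_rate(mid) > target_win_rate:
            lo = mid  # boss too easy: raise HP
        else:
            hi = mid  # boss too hard: lower HP
    return (lo + hi) / 2
```

In this toy, expected total damage is 1000, so the tuner converges near that value; in a real pipeline the inner call would launch full playtesting episodes, which is exactly why generalizing agents and evaluation metrics matter.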
Related papers
- You Have Thirteen Hours in Which to Solve the Labyrinth: Enhancing AI Game Masters with Function Calling [35.721053667746716]
This paper presents a novel approach to enhance AI game masters by leveraging function calling in the context of the table-top role-playing game "Jim Henson's Labyrinth: The Adventure Game".
Our methodology involves integrating game-specific controls through functions, which we show improves the narrative quality and state update consistency of the AI game master.
arXiv Detail & Related papers (2024-09-11T02:03:51Z)
- Personalized Dynamic Difficulty Adjustment -- Imitation Learning Meets Reinforcement Learning [44.99833362998488]
In this work, we explore balancing game difficulty using machine learning-based agents to challenge players based on their current behavior.
This is achieved by a combination of two agents, in which one learns to imitate the player, while the second is trained to beat the first.
In our demo, we investigate the proposed framework for personalized dynamic difficulty adjustment of AI agents in the context of the fighting game AI competition.
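The two-agent setup that summary sketches (one agent imitating the player, a second trained to beat the imitator) can be illustrated with a hypothetical toy that uses rock-paper-scissors as a stand-in for a fighting game. The frequency-counting "imitation" and all names below are illustrative assumptions, not the paper's actual method.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def imitate_player(observed_moves):
    """Agent 1: model the player by learning their move distribution.

    A frequency model stands in for a learned imitation policy.
    """
    counts = Counter(observed_moves)
    total = sum(counts.values())
    return {m: counts[m] / total for m in MOVES}

def train_challenger(imitator_policy):
    """Agent 2: learn to beat agent 1 by countering its likeliest move."""
    likely = max(imitator_policy, key=imitator_policy.get)
    return BEATS[likely]

# A simulated player with a strong bias toward rock.
rng = random.Random(7)
player_log = rng.choices(MOVES, weights=[0.7, 0.2, 0.1], k=500)

policy = imitate_player(player_log)
challenger_move = train_challenger(policy)
```

Because the challenger is optimized against the imitator rather than the live player, its difficulty tracks the player's current behavior, which is the core idea behind this style of personalized dynamic difficulty adjustment.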
arXiv Detail & Related papers (2024-08-13T11:24:12Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective [80.51463286812314]
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features.
AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay.
We show how LR decay breaks the balance of the minimax game by empowering the trainer with a stronger memorization ability.
arXiv Detail & Related papers (2023-10-30T09:00:11Z)
- Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games [58.720142291102135]
We describe an effort to add an experimental reinforcement learning system to an existing automated game testing solution based on scripted bots.
We show a use-case of leveraging reinforcement learning in game production and cover some of the largest time sinks anyone who wants to make the same journey for their game may encounter.
We propose a few research directions that we believe will be valuable and necessary for making machine learning, and especially reinforcement learning, an effective tool in game production.
arXiv Detail & Related papers (2023-07-19T18:19:23Z)
- CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- TotalBotWar: A New Pseudo Real-time Multi-action Game Challenge and Competition for AI [62.997667081978825]
TotalBotWar is a new pseudo real-time multi-action challenge for game AI.
The game is based on the popular TotalWar games series where players manage an army to defeat the opponent's army.
arXiv Detail & Related papers (2020-09-18T09:13:56Z)
- Exploring Dynamic Difficulty Adjustment in Videogames [0.0]
We will present Dynamic Difficulty Adjustment (DDA), an emerging research topic.
DDA aims to develop an automated difficulty selection mechanism that keeps the player engaged and properly challenged.
We will present some recent research addressing this issue, as well as an overview of how to implement it.
arXiv Detail & Related papers (2020-07-06T15:05:20Z)
- Gamifying the Vehicle Routing Problem with Stochastic Requests [0.0]
We consider the task of representing a classic logistics problem as a game. Then, we train agents to play it.
We show how various design features impact agents' performance, including perspective, field of view, and superhuman minimaps.
arXiv Detail & Related papers (2019-11-14T03:41:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.