Level the Level: Balancing Game Levels for Asymmetric Player Archetypes With Reinforcement Learning
- URL: http://arxiv.org/abs/2503.24099v1
- Date: Mon, 31 Mar 2025 13:55:04 GMT
- Title: Level the Level: Balancing Game Levels for Asymmetric Player Archetypes With Reinforcement Learning
- Authors: Florian Rupp, Kai Eckert
- Abstract summary: This work focuses on generating balanced levels tailored to asymmetric player archetypes. We extend a recently introduced method that uses reinforcement learning to balance tile-based game levels.
- Score: 0.28273304533873334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Balancing games, especially those with asymmetric multiplayer content, requires significant manual effort and extensive human playtesting during development. For this reason, this work focuses on generating balanced levels tailored to asymmetric player archetypes, where the disparity in abilities is balanced entirely through the level design. For instance, while one archetype may have an advantage over another, both should have an equal chance of winning. We therefore conceptualize game balancing as a procedural content generation problem and build on and extend a recently introduced method that uses reinforcement learning to balance tile-based game levels. We evaluate the method on four different player archetypes and demonstrate its ability to balance a larger proportion of levels compared to two baseline approaches. Furthermore, our results indicate that as the disparity between player archetypes increases, the required number of training steps grows, while the model's accuracy in achieving balance decreases.
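The abstract's core idea, that two unequal archetypes should reach an equal chance of winning purely through level design, can be expressed as a reward that peaks at a 50/50 simulated outcome. The following is a minimal sketch with hypothetical names (`simulate_match`, `strength`, `compensation` are illustrative stand-ins, not the authors' implementation):

```python
import random

def simulate_match(level, archetype_a, archetype_b):
    """Toy stand-in for a playout simulator: a real system would play
    the level with heuristic agents for each archetype."""
    # Hypothetical: the stronger archetype wins more often unless the
    # level compensates for the ability gap.
    edge = archetype_a["strength"] - archetype_b["strength"] + level["compensation"]
    return random.random() < 0.5 + max(-0.5, min(0.5, edge))

def balance_reward(level, archetype_a, archetype_b, n_games=100):
    """Reward is highest when both archetypes win ~50% of simulated games."""
    wins_a = sum(simulate_match(level, archetype_a, archetype_b)
                 for _ in range(n_games))
    win_rate_a = wins_a / n_games
    return 1.0 - 2.0 * abs(win_rate_a - 0.5)  # 1.0 = perfectly balanced
```

A perfectly one-sided level scores 0.0 under this reward, giving the agent a gradient toward edits that close the gap between archetypes.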
Related papers
- Simulation-Driven Balancing of Competitive Game Levels with Reinforcement Learning [0.2515642845381732]
We propose an architecture for automatically balancing tile-based levels within the PCGRL framework. Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation. We present improved results, explore the applicability of the method to various forms of balancing beyond equal balancing, compare the performance to another search-based approach, and discuss the application of existing fairness metrics to game balancing.
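The three-part loop described in this summary might be wired together roughly as follows. All names and interfaces here are hypothetical, sketched only to show how generator, agent, and reward-modeling simulation interact:

```python
import random

class RandomTileAgent:
    """Hypothetical stand-in for the RL balancing agent: picks a tile to flip."""
    def act(self, level):
        return random.randrange(len(level))
    def observe(self, level, action, reward):
        pass  # a real agent would update its policy here

def generate_level(size=16):
    # (1) level generator: random wall ('#') / floor ('.') tiles
    return [random.choice("#.") for _ in range(size)]

def simulate_win_rate(level, n_games=50):
    # (3) reward-model simulation (toy): more open tiles favour player A
    p = sum(t == "." for t in level) / len(level)
    return sum(random.random() < p for _ in range(n_games)) / n_games

def balancing_step(agent, level):
    # (2) the agent edits one tile, then the edit is scored by simulation
    i = agent.act(level)
    level[i] = "." if level[i] == "#" else "#"
    win_rate = simulate_win_rate(level)
    reward = 1.0 - 2.0 * abs(win_rate - 0.5)  # peak reward at a 50/50 outcome
    agent.observe(level, i, reward)
    return reward
```

Separating the simulator from the agent is what makes the reward "modeled": the agent never sees game rules directly, only simulated outcomes of its edits.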
arXiv Detail & Related papers (2025-03-24T14:57:17Z)
- Scalable Reinforcement Post-Training Beyond Static Human Prompts: Evolving Alignment via Asymmetric Self-Play [52.3079697845254]
eva is the first method that allows language models to adaptively create training prompts in both offline and online RL post-training.
We show eva can create effective RL curricula and is robust across ablations.
arXiv Detail & Related papers (2024-10-31T08:15:32Z)
- Identifying and Clustering Counter Relationships of Team Compositions in PvP Games for Efficient Balance Analysis [24.683917771144536]
We develop measures to quantify balance in zero-sum competitive scenarios.
We identify useful categories of compositions and pinpoint their counter relationships.
Our framework has been validated in popular online games, including Age of Empires II, Hearthstone, Brawl Stars, and League of Legends.
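One simple way to pinpoint counter relationships of the kind this summary describes is to scan a pairwise win-rate table for lopsided matchups. This is an illustrative sketch only; the paper's actual measures are more involved:

```python
def counter_relationships(win_rate, threshold=0.6):
    """Given a pairwise win-rate table {comp: {opponent: win rate}} for a
    zero-sum game, return pairs (a, b) where composition a counters b,
    i.e. a's win rate against b meets the threshold."""
    counters = []
    for a in sorted(win_rate):
        for b in sorted(win_rate):
            if a != b and win_rate[a].get(b, 0.5) >= threshold:
                counters.append((a, b))
    return counters
```

On a rock-paper-scissors-style table this recovers the familiar cycle rock → scissors → paper → rock; in a balanced metagame every composition appears on both sides of some counter pair.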
arXiv Detail & Related papers (2024-08-30T10:28:36Z)
- SkillMimic: Learning Basketball Interaction Skills from Demonstrations [85.23012579911378]
We introduce SkillMimic, a unified data-driven framework that fundamentally changes how agents learn interaction skills. Our key insight is that a unified HOI imitation reward can effectively capture the essence of diverse interaction patterns from HOI datasets. For evaluation, we collect and introduce two basketball datasets containing approximately 35 minutes of diverse basketball skills.
arXiv Detail & Related papers (2024-08-12T15:19:04Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective [80.51463286812314]
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features.
AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay.
We show how LR decay breaks the balance between the minimax game by empowering the trainer with a stronger memorization ability.
arXiv Detail & Related papers (2023-10-30T09:00:11Z)
- Balancing of competitive two-player Game Levels with Reinforcement Learning [0.2793095554369281]
We propose an architecture for automated balancing of tile-based levels within the recently introduced PCGRL framework.
Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation.
We show that this approach can teach an agent to alter a level for balance both better and faster than plain PCGRL.
arXiv Detail & Related papers (2023-06-07T13:40:20Z)
- Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z)
- Formalizing the Generalization-Forgetting Trade-off in Continual Learning [1.370633147306388]
We model the trade-off between catastrophic forgetting and generalization as a two-player sequential game.
We show theoretically that a balance point between the two players exists for each task and that this point is stable.
Next, we introduce balanced continual learning (BCL), which is designed to attain balance between generalization and forgetting.
arXiv Detail & Related papers (2021-09-28T20:39:04Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Metagame Autobalancing for Competitive Multiplayer Games [0.10499611180329801]
We present a tool for balancing multi-player games during game design.
Our approach requires a designer to construct an intuitive graphical representation of their meta-game target.
We show the capabilities of this tool on examples inheriting from Rock-Paper-Scissors, and on a more complex asymmetric fighting game.
arXiv Detail & Related papers (2020-06-08T08:55:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.