Simulation-Driven Balancing of Competitive Game Levels with Reinforcement Learning
- URL: http://arxiv.org/abs/2503.18748v1
- Date: Mon, 24 Mar 2025 14:57:17 GMT
- Title: Simulation-Driven Balancing of Competitive Game Levels with Reinforcement Learning
- Authors: Florian Rupp, Manuel Eberhardinger, Kai Eckert
- Abstract summary: We propose an architecture for automatically balancing tile-based levels within the PCGRL framework. Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation. We present improved results, explore the applicability of the method to various forms of balancing beyond equal balancing, compare the performance to another search-based approach, and discuss the application of existing fairness metrics to game balancing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The balancing process for game levels in competitive two-player contexts involves a lot of manual work and testing, particularly for non-symmetrical game levels. In this work, we frame game balancing as a procedural content generation task and propose an architecture for automatically balancing tile-based levels within the PCGRL framework (procedural content generation via reinforcement learning). Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation. Through repeated simulations, the balancing agent receives rewards for adjusting the level towards a given balancing objective, such as equal win rates for all players. To this end, we propose new swap-based representations to improve the robustness of playability, thereby enabling agents to balance game levels more effectively and quickly compared to traditional PCGRL. By analyzing the agent's swapping behavior, we can infer which tile types have the most impact on the balance. We validate our approach in the Neural MMO (NMMO) environment in a competitive two-player scenario. In this extended conference paper, we present improved results, explore the applicability of the method to various forms of balancing beyond equal balancing, compare the performance to another search-based approach, and discuss the application of existing fairness metrics to game balancing.
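To make the three-part loop concrete, here is a minimal, self-contained Python sketch. Everything in it is a stand-in: the tile set, the win-rate "simulation" (a toy heuristic rather than Neural MMO episodes), and a greedy agent in place of the trained PCGRL policy. Only the swap action, which preserves the level's tile distribution and thereby helps keep it playable, and the reward (movement of the simulated win rate toward a target) mirror the description above.

```python
import random

TILES = ["grass", "water", "forest"]

def generate_level(size=8, seed=0):
    """(1) Level generator: a random tile grid."""
    rng = random.Random(seed)
    return [[rng.choice(TILES) for _ in range(size)] for _ in range(size)]

def simulated_win_rate(level):
    """(3) Reward-modeling simulation, as a toy heuristic: player 1 owns the
    left half, player 2 the right; more forest (food) means more wins."""
    half = len(level[0]) // 2
    p1 = sum(row[:half].count("forest") for row in level)
    p2 = sum(row[half:].count("forest") for row in level)
    total = p1 + p2
    return 0.5 if total == 0 else p1 / total

def apply_swap(level, a, b):
    (ra, ca), (rb, cb) = a, b
    level[ra][ca], level[rb][cb] = level[rb][cb], level[ra][ca]

def swap_reward(level, a, b, target):
    """Reward: how far the swap moves the simulated win rate toward target."""
    before = abs(simulated_win_rate(level) - target)
    apply_swap(level, a, b)
    after = abs(simulated_win_rate(level) - target)
    apply_swap(level, a, b)  # undo the trial swap
    return before - after

def balance(level, steps=300, target=0.5, seed=1):
    """(2) Balancing agent, here a greedy stand-in for the RL policy: keep
    swaps with positive reward. Swaps preserve the tile distribution, which
    is what makes the swap-based representation robust for playability."""
    rng = random.Random(seed)
    n = len(level)
    for _ in range(steps):
        a = (rng.randrange(n), rng.randrange(n))
        b = (rng.randrange(n), rng.randrange(n))
        if swap_reward(level, a, b, target) > 0:
            apply_swap(level, a, b)
    return level

level = generate_level()
print("win rate before:", round(simulated_win_rate(level), 3))
balance(level)
print("win rate after: ", round(simulated_win_rate(level), 3))
```

Setting `target` to a value other than 0.5 corresponds to the forms of balancing beyond equal win rates that the extended paper explores.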
Related papers
- Level the Level: Balancing Game Levels for Asymmetric Player Archetypes With Reinforcement Learning
This work focuses on generating balanced levels tailored to asymmetric player archetypes.
We extend a recently introduced method that uses reinforcement learning to balance tile-based game levels.
arXiv Detail & Related papers (2025-03-31T13:55:04Z)
- Model as a Game: On Numerical and Spatial Consistency for Generative Games
We revisit the paradigm of generative games to explore what truly constitutes a Model as a Game (MaaG) with a well-developed mechanism.
Based on the DiT architecture, we design two specialized modules: (1) a numerical module that integrates a LogicNet to determine event triggers, with calculations processed externally as conditions for image generation; and (2) a spatial module that maintains a map of explored areas, retrieving location-specific information during generation and linking new observations to ensure continuity.
arXiv Detail & Related papers (2025-03-27T05:46:15Z)
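As a rough illustration of the two modules in the entry above, here is a hypothetical sketch; the class names, state fields, and trigger rules are invented, and the DiT generator itself is stubbed out as a conditioning dictionary.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    position: tuple   # player location on the generated map
    health: int = 100
    coins: int = 0

class NumericalModule:
    """Stand-in for the LogicNet: event triggers are computed externally and
    handed to the generator as conditions, not 'guessed' by the image model."""
    def events(self, state):
        triggers = []
        if state.health <= 0:
            triggers.append("game_over")
        if state.coins >= 10:
            triggers.append("level_up")
        return triggers

class SpatialModule:
    """Maintains a map of explored areas so revisited locations are rendered
    consistently with what was generated before."""
    def __init__(self):
        self.explored = {}
    def observe(self, pos, observation):
        self.explored[pos] = observation   # link the new observation
    def retrieve(self, pos):
        return self.explored.get(pos)      # location-specific information

def frame_conditions(state, numerical, spatial):
    """Assembles the conditioning inputs a DiT-based generator would receive."""
    return {
        "events": numerical.events(state),               # numerical consistency
        "known_area": spatial.retrieve(state.position),  # spatial consistency
    }

state = GameState(position=(3, 4), coins=12)
spatial = SpatialModule()
spatial.observe((3, 4), {"terrain": "village", "npc": "merchant"})
print(frame_conditions(state, NumericalModule(), spatial))
```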
- Beyond Win Rates: A Clustering-Based Approach to Character Balance Analysis in Team-Based Games
Character diversity in competitive games often introduces balance challenges that can negatively impact player experience and strategic depth.
Traditional balance assessments rely on aggregate metrics like win rates and pick rates.
This paper proposes a novel clustering-based methodology to analyze character balance.
arXiv Detail & Related papers (2025-02-03T11:20:21Z)
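A minimal sketch of what a clustering-based balance analysis could look like, assuming per-character performance profiles; the feature columns, synthetic data, and cluster count below are invented for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Rows: characters. Columns: win rate, pick rate, damage share, objective
# participation (all normalized to [0, 1]); synthetic placeholder data.
profiles = rng.random((30, 4))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

# Characters in the same cluster fill a similar competitive role; a cluster
# with uniformly high win AND pick rates is a balance suspect, even when no
# single character's win rate looks alarming on its own.
for c in range(4):
    members = np.where(kmeans.labels_ == c)[0]
    mean_win = profiles[members, 0].mean()
    mean_pick = profiles[members, 1].mean()
    print(f"cluster {c}: chars {members.tolist()}, "
          f"win={mean_win:.2f}, pick={mean_pick:.2f}")
```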
- Identifying and Clustering Counter Relationships of Team Compositions in PvP Games for Efficient Balance Analysis
We develop measures to quantify balance in zero-sum competitive scenarios.
We identify useful categories of compositions and pinpoint their counter relationships.
Our framework has been validated in popular online games, including Age of Empires II, Hearthstone, Brawl Stars, and League of Legends.
arXiv Detail & Related papers (2024-08-30T10:28:36Z)
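The quantification step can be illustrated with a toy pairwise win-rate matrix; the balance measure and the 0.65 counter threshold below are invented stand-ins, not the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6  # number of team compositions

# W[i, j] = empirical win rate of composition i against composition j.
W = rng.uniform(0.2, 0.8, size=(n, n))
W = (W + (1 - W.T)) / 2        # enforce the zero-sum constraint W[j,i] = 1 - W[i,j]
np.fill_diagonal(W, 0.5)

# A simple balance measure: mean absolute deviation from a 50% win rate.
imbalance = np.abs(W - 0.5).mean()
print(f"overall imbalance: {imbalance:.3f}")

# Counter relationships: pairs where i beats j clearly more often than not.
counters = [(i, j, W[i, j]) for i in range(n) for j in range(n)
            if W[i, j] > 0.65]
for i, j, w in counters:
    print(f"composition {i} counters {j} (win rate {w:.2f})")
```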
- Neural Population Learning beyond Symmetric Zero-sum Games
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features.
AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay.
We show how LR decay breaks the balance between the two players of the minimax game by empowering the trainer with a stronger memorization ability.
arXiv Detail & Related papers (2023-10-30T09:00:11Z)
- Balancing of competitive two-player Game Levels with Reinforcement Learning
We propose an architecture for automated balancing of tile-based levels within the recently introduced PCGRL framework.
Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation.
We show that this approach can teach an agent how to alter a level for balancing better and faster than plain PCGRL.
arXiv Detail & Related papers (2023-06-07T13:40:20Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
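A crude gradient-free sketch of the idea in the entry above: represent each player's mixed strategy by a growing particle set and compute best responses by random search instead of gradients, in a fictitious-play-style loop. The example game, particle representation, and update rule are all invented here; the paper instead trains randomized policy networks.

```python
import random

def u1(x, y):                      # player 1's payoff; player 2 gets -u1
    return (x - y) ** 2 - 0.1 * x  # arbitrary smooth zero-sum example game

def best_response(utility, opponent_mixture, n_candidates=64):
    """Zeroth-order best response: sample candidate actions and keep the one
    with the highest total payoff against the opponent's empirical mixture."""
    candidates = [random.random() for _ in range(n_candidates)]
    return max(candidates,
               key=lambda a: sum(utility(a, y) for y in opponent_mixture))

random.seed(0)
p1, p2 = [random.random()], [random.random()]  # initial particle sets
for _ in range(200):
    p1.append(best_response(lambda x, y: u1(x, y), p2))   # player 1 maximizes u1
    p2.append(best_response(lambda x, y: -u1(y, x), p1))  # player 2 minimizes u1

# After many iterations the empirical mixtures serve as a rough approximation
# of equilibrium play in this toy game.
print("player 1 mean action:", sum(p1) / len(p1))
print("player 2 mean action:", sum(p2) / len(p2))
```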
- Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games
We study adversarial training as an alternating best-response strategy in a 2-player zero-sum game.
While the alternating best-response dynamics may fail to converge, a unique pure Nash equilibrium of the game exists and is provably robust.
arXiv Detail & Related papers (2022-10-23T03:21:01Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)