NTRL: Encounter Generation via Reinforcement Learning for Dynamic Difficulty Adjustment in Dungeons and Dragons
- URL: http://arxiv.org/abs/2506.19530v2
- Date: Wed, 16 Jul 2025 21:26:44 GMT
- Title: NTRL: Encounter Generation via Reinforcement Learning for Dynamic Difficulty Adjustment in Dungeons and Dragons
- Authors: Carlo Romeo, Andrew D. Bagdanov
- Abstract summary: Encounter Generation via Reinforcement Learning (NTRL) is a novel approach that automates Dynamic Difficulty Adjustment (DDA) in Dungeons & Dragons (D&D). By framing the problem as a contextual bandit, NTRL generates encounters based on real-time party member attributes. In comparison with classic DMs, NTRL iteratively optimizes encounters to extend combat longevity (+200%), increase damage dealt to party members (reducing post-combat hit points), and raise the number of player deaths.
- Score: 8.856568375969848
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Balancing combat encounters in Dungeons & Dragons (D&D) is a complex task that requires Dungeon Masters (DMs) to manually assess party strength, enemy composition, and dynamic player interactions while avoiding interruption of the narrative flow. In this paper, we propose Encounter Generation via Reinforcement Learning (NTRL), a novel approach that automates Dynamic Difficulty Adjustment (DDA) in D&D via combat encounter design. By framing the problem as a contextual bandit, NTRL generates encounters based on real-time party member attributes. In comparison with classic DM heuristics, NTRL iteratively optimizes encounters to extend combat longevity (+200%), increase damage dealt to party members (reducing post-combat hit points by 16.67%), and raise the number of player deaths while maintaining a low rate of total party kills (TPK). The intensification of combat forces players to act wisely and engage in tactical maneuvers, even though the generated encounters guarantee high win rates (70%). Even in comparison with encounters designed by human Dungeon Masters, NTRL demonstrates superior performance, enhancing the strategic depth of combat while increasing difficulty in a manner that preserves overall game fairness.
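The contextual-bandit framing described in the abstract can be sketched as follows. This is a minimal illustration only: the encounter templates, context features (party level, remaining-HP fraction, party size), epsilon-greedy linear model, and the reward that peaks at a target post-combat HP fraction are all assumptions for the sketch, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action space: encounter templates the agent can choose among.
ENCOUNTERS = ["2 goblins", "4 skeletons", "1 ogre", "1 young dragon"]
N_ARMS = len(ENCOUNTERS)
CTX_DIM = 3  # illustrative context: mean party level, mean HP fraction, party size

class EpsilonGreedyContextualBandit:
    """Per-arm linear reward model with epsilon-greedy exploration."""

    def __init__(self, n_arms, ctx_dim, epsilon=0.1, lr=0.05):
        self.weights = np.zeros((n_arms, ctx_dim))
        self.epsilon = epsilon
        self.lr = lr

    def select(self, context):
        if rng.random() < self.epsilon:
            return int(rng.integers(len(self.weights)))
        return int(np.argmax(self.weights @ context))

    def update(self, arm, context, reward):
        # One SGD step on squared error between predicted and observed reward.
        pred = self.weights[arm] @ context
        self.weights[arm] += self.lr * (reward - pred) * context

def difficulty_reward(post_hp_fraction, target=0.4):
    # Reward peaks when combat leaves the party near the target HP fraction:
    # too easy (lots of HP left) and too lethal (little left) both score lower.
    return 1.0 - abs(post_hp_fraction - target)

bandit = EpsilonGreedyContextualBandit(N_ARMS, CTX_DIM)
for episode in range(500):
    context = np.array([rng.uniform(1, 5), rng.uniform(0.5, 1.0), 4.0])
    arm = bandit.select(context)
    # Stand-in for a combat simulation: tougher arms drain more HP.
    post_hp = max(0.0, context[1] - 0.15 * (arm + 1) + rng.normal(0, 0.05))
    bandit.update(arm, context, difficulty_reward(post_hp))

print(ENCOUNTERS[bandit.select(np.array([3.0, 0.9, 4.0]))])
```

The one-step bandit view (no state transitions between rounds) is what keeps the update rule this simple compared with full RL.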
Related papers
- Personalized Dynamic Difficulty Adjustment -- Imitation Learning Meets Reinforcement Learning [44.99833362998488]
In this work, we explore balancing game difficulty using machine learning-based agents to challenge players based on their current behavior.
This is achieved by a combination of two agents, in which one learns to imitate the player, while the second is trained to beat the first.
In our demo, we investigate the proposed framework for personalized dynamic difficulty adjustment of AI agents in the context of the fighting game AI competition.
arXiv Detail & Related papers (2024-08-13T11:24:12Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Continuous Reinforcement Learning-based Dynamic Difficulty Adjustment in a Visual Working Memory Game [5.857929080874288]
Reinforcement Learning (RL) methods have been employed for Dynamic Difficulty Adjustment (DDA) in non-competitive games.
We propose a continuous RL-based DDA methodology for a visual working memory (VWM) game to handle the complex search space for the difficulty of memorization.
arXiv Detail & Related papers (2023-08-24T12:05:46Z)
- CALYPSO: LLMs as Dungeon Masters' Assistants [46.61924662589895]
Large language models (LLMs) have shown remarkable abilities to generate coherent natural language text.
We introduce CALYPSO, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario.
When given access to CALYPSO, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency.
arXiv Detail & Related papers (2023-08-15T02:57:00Z)
- I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons [82.28503603235364]
We study teacher-student natural language interactions in a goal-driven environment in Dungeons and Dragons.
Our approach is to decompose and model these interactions into (1) the Dungeon Master's intent to guide players toward a given goal; (2) the DM's guidance utterance to the players expressing this intent; and (3) a theory-of-mind (ToM) model that anticipates the players' reaction to the guidance one turn into the future.
arXiv Detail & Related papers (2022-12-20T08:06:55Z)
- Dungeons and Dragons as a Dialog Challenge for Artificial Intelligence [28.558934742150022]
We frame D&D as a dialogue system challenge, where the tasks are to both generate the next conversational turn in the game and predict the state of the game given the dialogue history.
We create a gameplay dataset consisting of nearly 900 games, with a total of 7,000 players, 800,000 dialogue turns, 500,000 dice rolls, and 58 million words.
We train a large language model (LM) to generate the next game turn, conditioning it on different information.
arXiv Detail & Related papers (2022-10-13T15:43:39Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) achieves different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
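The MAP-Elites idea mentioned above (keep one elite per behavioral niche rather than a single best solution) can be shown with a minimal loop on a toy 1-D problem. The fitness function and the 10-bin behavior descriptor here are illustrative assumptions, not the Tribes play-style setup from the paper.

```python
import random

random.seed(1)

def fitness(x):
    # Toy objective standing in for an agent's level of play.
    return -(x - 0.3) ** 2

def descriptor(x):
    # Behavior descriptor: which of 10 "play-style" bins the parameter falls in.
    return min(int(x * 10), 9)

archive = {}  # bin -> (fitness, solution): one elite per behavioral niche
for _ in range(5000):
    if archive and random.random() < 0.8:
        # Mutate a random elite drawn from the archive.
        _, parent = random.choice(list(archive.values()))
        x = min(1.0, max(0.0, parent + random.gauss(0, 0.1)))
    else:
        x = random.random()  # occasional fresh random sample
    b = descriptor(x)
    # Replace the niche's elite only if the candidate is fitter.
    if b not in archive or fitness(x) > archive[b][0]:
        archive[b] = (fitness(x), x)

print(len(archive), "niches filled")
```

The archive ends up holding diverse solutions (one per niche), each pushed toward the best quality achievable within its niche, which is what distinguishes quality-diversity search from plain optimization.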
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.