Combining Reinforcement Learning and Behavior Trees for NPCs in Video Games with AMD Schola
- URL: http://arxiv.org/abs/2510.14154v1
- Date: Wed, 15 Oct 2025 23:00:48 GMT
- Title: Combining Reinforcement Learning and Behavior Trees for NPCs in Video Games with AMD Schola
- Authors: Tian Liu, Alex Cann, Ian Colbert, Mehdi Saeedi
- Abstract summary: We outline challenges the Game AI community faces when using RL-driven NPCs in practice. We highlight the intersection of RL with traditional behavior trees (BTs) as a crucial juncture to be explored further.
- Score: 2.011248169400339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the rapid advancements in the reinforcement learning (RL) research community have been remarkable, the adoption in commercial video games remains slow. In this paper, we outline common challenges the Game AI community faces when using RL-driven NPCs in practice, and highlight the intersection of RL with traditional behavior trees (BTs) as a crucial juncture to be explored further. Although the BT+RL intersection has been suggested in several research papers, its adoption is rare. We demonstrate the viability of this approach using AMD Schola -- a plugin for training RL agents in Unreal Engine -- by creating multi-task NPCs in a complex 3D environment inspired by the commercial video game ``The Last of Us". We provide detailed methodologies for jointly training RL models with BTs while showcasing various skills.
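The BT+RL combination described in the abstract can be illustrated with a minimal sketch: a behavior tree handles high-level control flow, while leaf nodes delegate moment-to-moment action selection to trained RL policies. The node classes, skill names, and stand-in policies below are hypothetical illustrations, not the AMD Schola API; real policies would be neural networks exported after training in Unreal Engine.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Selector:
    """Tick children in order; return the first non-FAILURE status."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

class Sequence:
    """Tick children in order; bail out on the first non-SUCCESS status."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Condition:
    """Leaf that maps a blackboard predicate to SUCCESS/FAILURE."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, bb):
        return Status.SUCCESS if self.predicate(bb) else Status.FAILURE

class RLSkill:
    """Leaf that delegates action selection to a trained RL policy.
    `policy` is any callable mapping an observation to an action."""
    def __init__(self, name, policy):
        self.name, self.policy = name, policy
    def tick(self, bb):
        bb["action"] = (self.name, self.policy(bb["observation"]))
        return Status.RUNNING  # the skill keeps control until the BT preempts it

# Hypothetical stand-in policies, for illustration only.
combat_policy = lambda obs: "strafe_and_shoot"
patrol_policy = lambda obs: "follow_waypoints"

# BT structure: fight if an enemy is visible, otherwise patrol.
npc = Selector(
    Sequence(Condition(lambda bb: bb["enemy_visible"]),
             RLSkill("combat", combat_policy)),
    RLSkill("patrol", patrol_policy),
)

bb = {"observation": {}, "enemy_visible": True}
npc.tick(bb)
print(bb["action"])  # ('combat', 'strafe_and_shoot')
```

The design choice mirrors the paper's framing: the tree remains the designer-authored, debuggable scaffold, while each `RLSkill` leaf encapsulates one learned multi-task skill, so individual skills can be retrained without restructuring the NPC's overall logic.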
Related papers
- Human-Like Goalkeeping in a Realistic Football Simulation: a Sample-Efficient Reinforcement Learning Approach [35.515515697546554]
This paper proposes a sample-efficient Deep Reinforcement Learning (DRL) method tailored for training and fine-tuning agents in industrial settings. We evaluate our method training a goalkeeper agent in EA SPORTS FC 25, one of the best-selling football simulations today. Our agent outperforms the game's built-in AI by 10% in ball saving rate.
arXiv Detail & Related papers (2025-10-27T11:06:00Z)
- Game-RL: Synthesizing Multimodal Verifiable Game Data to Boost VLMs' General Reasoning [89.93384726755106]
Vision-language reinforcement learning (RL) has primarily focused on narrow domains. We find video games inherently provide rich visual elements and mechanics that are easy to verify. To fully use the multimodal and verifiable reward in video games, we propose Game-RL.
arXiv Detail & Related papers (2025-05-20T03:47:44Z)
- RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning [125.96848846966087]
Training large language models (LLMs) as interactive agents presents unique challenges. While reinforcement learning has enabled progress in static tasks, multi-turn agent RL training remains underexplored. We propose StarPO, a general framework for trajectory-level agent RL, and introduce RAGEN, a modular system for training and evaluating LLM agents.
arXiv Detail & Related papers (2025-04-24T17:57:08Z)
- Reinforcing Competitive Multi-Agents for Playing 'So Long Sucker' [0.12234742322758417]
This paper investigates the strategy game So Long Sucker (SLS) as a novel benchmark for multi-agent reinforcement learning (MARL). We introduce the first publicly available computational framework for SLS, complete with a graphical user interface and benchmarking support for reinforcement learning algorithms.
arXiv Detail & Related papers (2024-11-17T12:38:13Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Dialogue Shaping: Empowering Agents through NPC Interaction [11.847150109599982]
Non-player characters (NPCs) sometimes hold some key information about the game, which can potentially help to train RL agents faster.
This paper explores how to interact and converse with NPC agents to get the key information using large language models (LLMs).
arXiv Detail & Related papers (2023-07-28T22:44:54Z)
- Architecting and Visualizing Deep Reinforcement Learning Models [77.34726150561087]
Deep Reinforcement Learning (DRL) is a framework for training agents to make sequential decisions by interacting with an environment.
In this paper, we present a new Atari Pong game environment, a policy gradient based DRL model, a real-time network visualization, and an interactive display to help build intuition and awareness of the mechanics of DRL inference.
arXiv Detail & Related papers (2021-12-02T17:48:26Z)
- How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned [111.06812202454364]
We present a number of case studies involving robotic deep RL.
We discuss commonly perceived challenges in deep RL and how they have been addressed in these works.
We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting.
arXiv Detail & Related papers (2021-02-04T22:09:28Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.