Mimicking Playstyle by Adapting Parameterized Behavior Trees in RTS Games
- URL: http://arxiv.org/abs/2111.12144v1
- Date: Tue, 23 Nov 2021 20:36:28 GMT
- Title: Mimicking Playstyle by Adapting Parameterized Behavior Trees in RTS Games
- Authors: Andrzej Kozik, Tomasz Machalewski, Mariusz Marek, Adrian Ochmann
- Abstract summary: Behavior Trees (BTs) impacted the field of Artificial Intelligence (AI) in games.
Growing demands on NPC AI-agents have forced the complexity of handcrafted BTs to become barely tractable and error-prone.
Recent trends in the field have focused on the automatic creation of AI-agents.
We present a novel approach to the semi-automatic construction of AI-agents that mimic and generalize given human gameplays.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discovery of Behavior Trees (BTs) impacted the field of Artificial
Intelligence (AI) in games by providing a flexible and natural representation of
non-player character (NPC) logic that is manageable by game designers. Nevertheless,
increasing pressure for ever-better NPC AI-agents has forced the complexity of
handcrafted BTs to become barely tractable and error-prone. On the other hand,
while many just-launched online games suffer from a player shortage, the
existence of AI with a broad range of capabilities could increase player
retention. Therefore, to handle the above challenges, recent trends in the field
have focused on the automatic creation of AI-agents: from deep- and
reinforcement-learning techniques to combinatorial (constrained) optimization
and evolution of BTs. In this paper, we present a novel approach to the
semi-automatic construction of AI-agents that mimic and generalize given human
gameplays by adapting and tuning an expert-created BT under a developed
similarity metric between source and BT gameplays. To this end, we formulated a
mixed discrete-continuous optimization problem, in which topological and
functional changes of the BT are reflected in numerical variables, and
constructed a dedicated hybrid metaheuristic. The performance of the presented
approach was verified experimentally in a prototype real-time strategy game.
The experiments confirmed the efficiency and promise of the presented
approach, which is going to be applied in a commercial game.
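The shape of this objective can be pictured with a toy example. The following Python sketch is illustrative only and rests on strong assumptions: a three-rule BT whose discrete flag and continuous thresholds stand in for the paper's topological and functional variables, a stub game model, per-step action agreement as a stand-in for the developed similarity metric, and plain stochastic hill-climbing in place of the dedicated hybrid metaheuristic. Every name in it (bt_action, simulate, similarity, perturb) is hypothetical.

```python
# Illustrative sketch only -- not the authors' code.
import random

def bt_action(params, state):
    """A toy parameterized BT: a priority list of (condition, action) rules.
    The discrete flag stands in for topological changes, the thresholds
    for continuous tuning of node parameters."""
    if params["aggressive"] and state["army"] >= params["attack_thr"]:
        return "attack"
    if state["minerals"] >= params["train_thr"]:
        return "train_army"
    return "harvest"

def simulate(params, n_steps=60):
    """Roll the BT forward in a stub game model and record its actions."""
    state = {"army": 0, "minerals": 0}
    trace = []
    for _ in range(n_steps):
        action = bt_action(params, state)
        trace.append(action)
        if action == "harvest":
            state["minerals"] += 2
        elif action == "train_army":
            state["minerals"] -= 4
            state["army"] += 1
        else:  # "attack": toy attrition
            state["army"] -= 1
    return trace

def similarity(trace_a, trace_b):
    """Per-step action agreement, a crude stand-in for the paper's
    similarity metric between source and BT gameplays."""
    return sum(a == b for a, b in zip(trace_a, trace_b)) / len(trace_a)

def perturb(params, rng):
    """One move over the mixed discrete-continuous search space."""
    p = dict(params)
    if rng.random() < 0.3:  # discrete (structural) move
        p["aggressive"] = not p["aggressive"]
    else:                   # continuous (parameter-tuning) move
        key = rng.choice(["attack_thr", "train_thr"])
        p[key] = max(0.0, p[key] + rng.gauss(0.0, 2.0))
    return p

# Stand-in for a recorded human gameplay log.
human_trace = simulate({"aggressive": True, "attack_thr": 5, "train_thr": 8})

# Plain stochastic hill-climbing in place of the hybrid metaheuristic.
rng = random.Random(0)
best = {"aggressive": False, "attack_thr": 20.0, "train_thr": 4.0}
best_score = similarity(simulate(best), human_trace)
for _ in range(2000):
    candidate = perturb(best, rng)
    score = similarity(simulate(candidate), human_trace)
    if score >= best_score:
        best, best_score = candidate, score
print(best, round(best_score, 3))
```

In the paper, the numerical variables also encode topological edits of the expert BT and the metaheuristic is far more sophisticated; the sketch only shows the shape of the mimicry objective.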
Related papers
- Mastering the Digital Art of War: Developing Intelligent Combat Simulation Agents for Wargaming Using Hierarchical Reinforcement Learning [0.0]
This dissertation proposes a comprehensive approach, including targeted observation abstractions, multi-model integration, a hybrid AI framework, and an overarching hierarchical reinforcement learning framework.
Our localized observation abstraction using piecewise linear spatial decay simplifies the RL problem, enhancing computational efficiency and demonstrating superior efficacy over traditional global observation methods (a generic sketch follows).
Our hybrid AI framework synergizes RL with scripted agents, leveraging RL for high-level decisions and scripted agents for lower-level tasks, enhancing adaptability, reliability, and performance.
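A minimal sketch of a localized observation abstraction with piecewise linear spatial decay, in the spirit of the summary above; the radii, grid size, and entity encoding are illustrative assumptions, not the dissertation's actual specification.

```python
# Hedged sketch: all constants and encodings below are assumptions.
import numpy as np

def decay_weight(dist, full_radius=3.0, zero_radius=10.0):
    """Piecewise linear decay: 1.0 inside full_radius, falling
    linearly to 0.0 at zero_radius, 0.0 beyond."""
    if dist <= full_radius:
        return 1.0
    if dist >= zero_radius:
        return 0.0
    return (zero_radius - dist) / (zero_radius - full_radius)

def local_observation(agent_xy, entities, grid=8, cell=2.0):
    """Aggregate entity strengths into a small agent-centred grid,
    down-weighting distant entities instead of observing the whole map."""
    obs = np.zeros((grid, grid), dtype=np.float32)
    ax, ay = agent_xy
    half = grid * cell / 2
    for (x, y, strength) in entities:
        dx, dy = x - ax, y - ay
        if abs(dx) >= half or abs(dy) >= half:
            continue  # outside the local window entirely
        i = int((dx + half) // cell)
        j = int((dy + half) // cell)
        obs[i, j] += strength * decay_weight(np.hypot(dx, dy))
    return obs

# Toy usage: three units around an agent at the origin.
print(local_observation((0.0, 0.0),
                        [(1.0, 1.0, 5.0), (6.0, -2.0, 3.0), (30.0, 0.0, 9.0)]))
```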
arXiv Detail & Related papers (2024-08-23T18:50:57Z)
- Toward Optimal LLM Alignments Using Two-Player Games [86.39338084862324]
In this paper, we investigate alignment through the lens of two-agent games, involving iterative interactions between an adversarial and a defensive agent.
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium of the game induced by the agents.
Experimental results in safety scenarios demonstrate that learning in such a competitive environment not only fully trains agents but also leads to policies with enhanced generalization capabilities for both adversarial and defensive agents; a toy illustration of the equilibrium dynamics follows.
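As a toy, hedged illustration of iterative two-player optimization approaching a Nash equilibrium, here is fictitious play on a zero-sum matrix game; this is a classical stand-in, not the paper's RL procedure over LLM agents.

```python
# Fictitious play on matching pennies (zero-sum), shown only to
# illustrate convergence of iterated best responses to equilibrium.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix
row_counts = np.ones(2)  # pseudo-counts of each player's past plays
col_counts = np.ones(2)
for _ in range(10000):
    col_mix = col_counts / col_counts.sum()
    row_mix = row_counts / row_counts.sum()
    row_counts[np.argmax(A @ col_mix)] += 1     # row best-responds
    col_counts[np.argmax(-(row_mix @ A))] += 1  # column minimizes row payoff
print(row_counts / row_counts.sum(), col_counts / col_counts.sum())
# Both empirical frequencies approach the (0.5, 0.5) Nash mixture.
```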
arXiv Detail & Related papers (2024-06-16T15:24:50Z)
- Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Tachikuma: Understanding Complex Interactions with Multi-Character and Novel Objects by Large Language Models [67.20964015591262]
We introduce a benchmark named Tachikuma, comprising a Multiple character and novel Object based interaction Estimation task and a supporting dataset.
The dataset captures log data from real-time communications during gameplay, providing diverse, grounded, and complex interactions for further explorations.
We present a simple prompting baseline and evaluate its performance, demonstrating its effectiveness in enhancing interaction understanding.
arXiv Detail & Related papers (2023-07-24T07:40:59Z)
- Mastering Asymmetrical Multiplayer Game with Multi-Agent Asymmetric-Evolution Reinforcement Learning [8.628547849796615]
Asymmetrical multiplayer (AMP) games are a popular genre in which multiple types of agents compete or collaborate.
It is difficult to train powerful agents that can defeat top human players in AMP games with typical self-play training methods because of the unbalanced characteristics of their asymmetrical environments.
We propose asymmetric-evolution training (AET), a novel multi-agent reinforcement learning framework that can train multiple kinds of agents simultaneously in AMP games.
arXiv Detail & Related papers (2023-04-20T07:14:32Z)
- DIAMBRA Arena: a New Reinforcement Learning Platform for Research and Experimentation [91.3755431537592]
This work presents DIAMBRA Arena, a new platform for reinforcement learning research and experimentation.
It features a collection of high-quality environments exposing a Python API fully compliant with the OpenAI Gym standard.
They are episodic tasks with discrete actions and observations composed of raw pixels plus additional numerical values (a minimal usage sketch follows).
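A minimal usage sketch of the Gym-style loop described above; the game id and the exact reset/step signatures are assumptions that depend on the installed DIAMBRA Arena version (this follows the Gymnasium five-tuple convention used by recent releases).

```python
# Hedged sketch; requires `pip install diambra-arena` and the DIAMBRA engine.
import diambra.arena

env = diambra.arena.make("doapp")  # "doapp" assumed to be a valid game id
observation, info = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random discrete action
    observation, reward, terminated, truncated, info = env.step(action)
    # observation mixes raw pixel frames with extra numerical values
    done = terminated or truncated
env.close()
```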
arXiv Detail & Related papers (2022-10-19T14:39:10Z)
- Playing a 2D Game Indefinitely using NEAT and Reinforcement Learning [0.0]
The performance of algorithms can be compared by using artificial agents that behave according to each algorithm in the environment they are placed in.
The algorithms applied to the artificial agents are NeuroEvolution of Augmenting Topologies (NEAT) and Reinforcement Learning.
arXiv Detail & Related papers (2022-07-28T15:01:26Z)
- On games and simulators as a platform for development of artificial intelligence for command and control [46.33784995107226]
Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect information scenarios.
The success of artificial intelligence algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community.
arXiv Detail & Related papers (2021-10-21T17:39:58Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions (a generic sketch follows after this entry).
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
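As a hedged sketch of one generic way to handle complex categorical state spaces: the paper proposes two dedicated architectures, and the embedding-plus-convolution network below, with made-up sizes, is only a stand-in for the general idea.

```python
# Generic stand-in, not the paper's architectures; all sizes are assumptions.
import torch
import torch.nn as nn

class CategoricalGridPolicy(nn.Module):
    def __init__(self, n_tile_types=64, emb_dim=8, grid=11, n_actions=6):
        super().__init__()
        # Learned embeddings turn categorical tile ids into dense vectors,
        # so a design change can resize the embedding table instead of
        # forcing a full retrain of a one-hot input layer.
        self.embed = nn.Embedding(n_tile_types, emb_dim)
        self.conv = nn.Sequential(
            nn.Conv2d(emb_dim, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * grid * grid, n_actions)

    def forward(self, tile_ids):          # (batch, grid, grid), long dtype
        x = self.embed(tile_ids)          # (batch, grid, grid, emb_dim)
        x = x.permute(0, 3, 1, 2)         # to (batch, emb_dim, grid, grid)
        x = self.conv(x)
        return self.head(x.flatten(1))    # action logits

policy = CategoricalGridPolicy()
dummy = torch.randint(0, 64, (2, 11, 11))
print(policy(dummy).shape)  # torch.Size([2, 6])
```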
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.