Identification of Play Styles in Universal Fighting Engine
- URL: http://arxiv.org/abs/2108.03599v1
- Date: Sun, 8 Aug 2021 10:06:16 GMT
- Title: Identification of Play Styles in Universal Fighting Engine
- Authors: Kaori Yuda, Shota Kamei, Riku Tanji, Ryoya Ito, Ippo Wakana and Maxim Mozgovoy
- Abstract summary: We show how an automated procedure can be used to compare play styles of individual AI- and human-controlled characters.
We also show how it can be used to assess human-likeness and diversity of game participants.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-controlled characters in fighting games are expected to possess reasonably
high skills and behave in a believable, human-like manner, exhibiting a
diversity of play styles and strategies. Thus, the development of fighting game
AI requires the ability to evaluate these properties. For instance, it should
be possible to ensure that the characters created are believable and diverse.
In this paper, we show how an automated procedure can be used to compare play
styles of individual AI- and human-controlled characters, and to assess
human-likeness and diversity of game participants.
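As a rough illustration of what such an automated play-style comparison could look like (the paper does not specify its exact procedure; the move vocabulary, the example action logs, and the use of Jensen-Shannon divergence here are illustrative assumptions), one could represent each player's recorded matches as an action-frequency distribution and measure pairwise divergence:

```python
# Hypothetical sketch: compare play styles via action-frequency
# distributions and Jensen-Shannon divergence. The move names and
# example logs are invented for illustration, not taken from the paper.
from collections import Counter
from math import log2

def action_distribution(actions, vocabulary):
    """Normalize a logged action sequence into a probability distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return [counts[a] / total for a in vocabulary]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (base 2, in [0, 1])."""
    def kl(a, b):
        return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

VOCAB = ["punch", "kick", "guard", "throw", "jump"]  # illustrative move set

# Two hypothetical players with visibly different styles.
aggressive = action_distribution(["punch"] * 6 + ["kick"] * 3 + ["throw"], VOCAB)
defensive = action_distribution(["guard"] * 6 + ["jump"] * 2 + ["punch"] * 2, VOCAB)

d = js_divergence(aggressive, defensive)
print(f"play-style divergence: {d:.3f}")  # 0 = identical styles, 1 = maximally different
```

Under this kind of scheme, clusters of low mutual divergence would indicate shared play styles, and an AI-controlled character whose distribution falls far from every human cluster would score poorly on human-likeness.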
Related papers
- Maia-2: A Unified Model for Human-AI Alignment in Chess [10.577896749797485]
We propose a unified modeling approach for human-AI alignment in chess.
We introduce a skill-aware attention mechanism to dynamically integrate players' strengths with encoded chess positions.
Our results pave the way for deeper insights into human decision-making and AI-guided teaching tools.
arXiv Detail & Related papers (2024-09-30T17:54:23Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z)
- C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters [49.83342243500835]
We present C$\cdot$ASE, an efficient framework that learns conditional Adversarial Skill Embeddings for physics-based characters.
C$\cdot$ASE divides the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model.
The skill-conditioned imitation learning naturally offers explicit control over the character's skills after training.
arXiv Detail & Related papers (2023-09-20T14:34:45Z)
- Diversity-based Deep Reinforcement Learning Towards Multidimensional Difficulty for Fighting Game AI [0.9645196221785693]
We introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty.
We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
arXiv Detail & Related papers (2022-11-04T21:49:52Z)
- Detecting Individual Decision-Making Style: Exploring Behavioral Stylometry in Chess [4.793072503820555]
We present a transformer-based approach to behavioral stylometry in the context of chess.
Our method operates in a few-shot classification framework, and can correctly identify a player from among thousands of candidate players.
We consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications.
arXiv Detail & Related papers (2022-08-02T11:18:16Z)
- Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity [49.68758494467258]
We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games.
arXiv Detail & Related papers (2021-10-08T11:29:52Z)
- Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use-cases for how these methods are indeed useful for video game production and designers.
arXiv Detail & Related papers (2021-04-21T16:08:44Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning Models of Individual Behavior in Chess [4.793072503820555]
We develop highly accurate predictive models of individual human behavior in chess.
Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people.
arXiv Detail & Related papers (2020-08-23T18:24:21Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.