An Unsupervised Video Game Playstyle Metric via State Discretization
- URL: http://arxiv.org/abs/2110.00950v1
- Date: Sun, 3 Oct 2021 08:30:51 GMT
- Title: An Unsupervised Video Game Playstyle Metric via State Discretization
- Authors: Chiu-Chou Lin, Wei-Chen Chiu and I-Chen Wu
- Abstract summary: We propose the first metric for video game playstyles computed directly from game observations and actions.
Our proposed method is built upon a novel scheme for learning discrete representations.
We demonstrate the high playstyle accuracy of our metric in experiments on several video game platforms.
- Score: 20.48689549093258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When playing video games, different players usually have their own playstyles. Recently, there have been great improvements in the playing strength of video game AIs. However, past research on analyzing player behavior still relied on heuristic rules or on behavior features that require game-environment support, making it exhausting for developers to define features that discriminate between various playstyles. In this paper, we propose the first metric for video game playstyles computed directly from game observations and actions, without any prior specification of playstyles in the target game. Our proposed method is built upon a novel scheme for learning discrete representations that can map game observations into latent discrete states, such that playstyles can be exhibited through these discrete states. Namely, we measure the playstyle distance based on game observations aligned to the same states. We demonstrate the high playstyle accuracy of our metric in experiments on several video game platforms, including TORCS, RGSK, and seven Atari games, and for different agents including rule-based AI bots, learning-based AI bots, and human players.
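To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of a playstyle distance over discrete states: observations are mapped to state ids by an assumed `encode_state` function, and two players are compared only on states they both visit, via the gap between their per-state action distributions. All names and the specific distance choice are illustrative.

```python
from collections import defaultdict
import numpy as np

def playstyle_distance(trajectory_a, trajectory_b, encode_state, num_actions):
    """Each trajectory is a list of (observation, action) pairs, with actions
    given as integer ids in [0, num_actions). `encode_state` is assumed to map
    an observation to a discrete state id."""
    # Collect per-state action counts for each player.
    counts_a = defaultdict(lambda: np.zeros(num_actions))
    counts_b = defaultdict(lambda: np.zeros(num_actions))
    for obs, act in trajectory_a:
        counts_a[encode_state(obs)][act] += 1
    for obs, act in trajectory_b:
        counts_b[encode_state(obs)][act] += 1

    # Compare the two players only on discrete states both have visited.
    shared = set(counts_a) & set(counts_b)
    if not shared:
        return float("inf")  # no common states to align on
    distances = []
    for s in shared:
        p = counts_a[s] / counts_a[s].sum()
        q = counts_b[s] / counts_b[s].sum()
        distances.append(0.5 * np.abs(p - q).sum())  # total variation distance
    return float(np.mean(distances))
```

Two agents with similar behavior should then yield a small distance and agents with distinct playstyles a large one; note that the paper learns the discrete state encoder rather than assuming it is given.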
Related papers
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z)
- Perceptual Similarity for Measuring Decision-Making Style and Policy Diversity in Games [28.289135305943056]
Defining and measuring decision-making styles, also known as playstyles, is crucial in gaming.
We introduce three enhancements to increase accuracy: multiscale analysis with varied state granularity, a perceptual kernel rooted in psychology, and the utilization of the intersection-over-union method for efficient evaluation.
Our findings improve the measurement of end-to-end game analysis and the evolution of artificial intelligence for diverse playstyles.
arXiv Detail & Related papers (2024-08-12T10:55:42Z)
- People use fast, goal-directed simulation to reason about novel games [75.25089384921557]
We study how people reason about a range of simple but novel connect-n style board games.
We ask people to judge how fair and how fun the games are from very little experience.
arXiv Detail & Related papers (2024-07-19T07:59:04Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Automated Play-Testing Through RL Based Human-Like Play-Styles Generation [0.0]
Reinforcement Learning is a promising answer to the need to automate video game testing.
We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate players' play-styles, even on previously unseen levels.
arXiv Detail & Related papers (2022-11-29T14:17:20Z)
- Configurable Agent With Reward As Input: A Play-Style Continuum Generation [0.0]
We present a video game environment which lets us define multiple play-styles.
We then introduce CARI: a Reinforcement Learning agent able to simulate a wide range of play-styles.
arXiv Detail & Related papers (2022-11-29T13:59:25Z)
- Are AlphaZero-like Agents Robust to Adversarial Perturbations? [73.13944217915089]
AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin.
We ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.
We develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space.
arXiv Detail & Related papers (2022-11-07T18:43:25Z)
- Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers (a rough illustrative sketch appears after this list).
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Benchmarking End-to-End Behavioural Cloning on Video Games [5.863352129133669]
We study the general applicability of behavioural cloning on twelve video games, including six modern video games (published after 2010).
Our results show that these agents cannot match humans in raw performance but do learn basic dynamics and rules.
We also demonstrate how the quality of the data matters, and how recording data from humans is subject to a state-action mismatch, due to human reflexes.
arXiv Detail & Related papers (2020-04-02T13:31:51Z)
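As referenced in the collusion-detection entry above, one way to automate outlier-based detection is with an Isolation Forest over per-player behavioral features. The following is a minimal sketch assuming such feature vectors have already been extracted; the features, contamination rate, and data here are purely illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Rows: players; columns: hypothetical in-game behavioral/social features
# (e.g., cross-team assists, time spent near a suspected partner).
features = rng.normal(size=(500, 4))
features[:5] += 4.0  # a few players with unusually correlated behavior

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(features)  # -1 marks outliers (potential colluders)
suspects = np.flatnonzero(labels == -1)
print("flagged players:", suspects)
```

In practice the feature design carries most of the signal; the unsupervised detector only surfaces players whose combined social and behavioral patterns deviate strongly from the rest of the population.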