Feint Behaviors and Strategies: Formalization, Implementation and Evaluation
- URL: http://arxiv.org/abs/2403.07932v2
- Date: Sat, 07 Jun 2025 16:22:33 GMT
- Title: Feint Behaviors and Strategies: Formalization, Implementation and Evaluation
- Authors: Junyu Liu, Xiangjun Peng
- Abstract summary: Feint behaviors are crucial tactics in most competitive multi-player games. We introduce the first comprehensive formalization of Feint behaviors at both action-level and strategy-level. We provide concrete implementation and quantitative evaluation of them in multi-player games.
- Score: 6.61661097573508
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Feint behaviors refer to a set of nuanced deceptive behaviors that enable players to gain temporal and spatial advantages over opponents in competitive games. Such behaviors are crucial tactics in most competitive multi-player games (e.g., boxing, fencing, basketball, motor racing, etc.). However, existing literature does not provide a comprehensive (and/or concrete) formalization of Feint behaviors and their implications for game strategies. In this work, we introduce the first comprehensive formalization of Feint behaviors at both the action level and the strategy level, and provide a concrete implementation and quantitative evaluation of them in multi-player games. The key idea of our work is to (1) enable automatic generation of Feint behaviors via Palindrome-directed templates and combine them into meaningful behavior sequences via a Dual-Behavior Model; (2) concretize the implications of our Feint formalization for game strategies, in terms of temporal, spatial, and collective impacts respectively; and (3) provide a unified implementation scheme for Feint behaviors in existing MARL frameworks. The experimental results show that our design of Feint behaviors can (1) greatly improve game reward gains; (2) significantly improve the diversity of multi-player games; and (3) incur only negligible time overheads.
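The abstract's core idea can be illustrated with a minimal sketch. The function names, action labels, and sequence structure below are assumptions for illustration only, not the authors' actual implementation of Palindrome-directed templates or the Dual-Behavior Model.

```python
# Hypothetical sketch: a Palindrome-directed template mirrors a short action
# prefix (the agent appears to commit, then retracts), and a Dual-Behavior
# Model chains the Feint with the genuine follow-up it sets up.

def palindrome_feint(prefix):
    """Generate a Feint by mirroring a short action prefix."""
    return prefix + prefix[::-1]

def dual_behavior_sequence(feint_prefix, real_actions):
    """Chain a Feint segment with the genuine actions it is meant to set up."""
    return palindrome_feint(feint_prefix) + real_actions

seq = dual_behavior_sequence(["step_left", "raise_guard"], ["strike_right"])
print(seq)
# → ['step_left', 'raise_guard', 'raise_guard', 'step_left', 'strike_right']
```

The palindrome structure guarantees the deceptive segment returns the agent to its starting pose, so the real attack can follow immediately.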
Related papers
- Enhancing Player Enjoyment with a Two-Tier DRL and LLM-Based Agent System for Fighting Games [41.463376100442396]
We propose a two-tier agent system and conduct experiments in the classic fighting game Street Fighter II.
The first tier of TTA employs a task-oriented network architecture, modularized reward functions, and hybrid training to produce diverse and skilled DRL agents.
In the second tier of TTA, a Large Language Model Hyper-Agent, leveraging players' playing data and feedback, dynamically selects suitable DRL opponents.
arXiv Detail & Related papers (2025-04-10T03:38:06Z) - Model as a Game: On Numerical and Spatial Consistency for Generative Games [117.36098212829766]
We revisit the paradigm of generative games to explore what truly constitutes a Model as a Game (MaaG) with a well-developed mechanism.
Based on the DiT architecture, we design two specialized modules: (1) a numerical module that integrates a LogicNet to determine event triggers, with calculations processed externally as conditions for image generation; and (2) a spatial module that maintains a map of explored areas, retrieving location-specific information during generation and linking new observations to ensure continuity.
arXiv Detail & Related papers (2025-03-27T05:46:15Z) - Ranking Joint Policies in Dynamic Games using Evolutionary Dynamics [0.0]
It has been shown that the dynamics of agents' interactions, even in simple two-player games, are incapable of reaching Nash equilibria. Our goal is to identify agents' joint strategies that result in stable behavior, resistant to change, while also accounting for agents' payoffs.
arXiv Detail & Related papers (2025-02-20T16:50:38Z) - player2vec: A Language Modeling Approach to Understand Player Behavior in Games [2.2216044069240657]
Methods for learning latent user representations from historical behavior logs have gained traction for recommendation tasks in e-commerce, content streaming, and other settings.
We present a novel method for overcoming this limitation by extending a long-range Transformer model to player behavior data.
We discuss specifics of behavior tracking in games and propose preprocessing and tokenization approaches by viewing in-game events in an analogous way to words in sentences.
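The "in-game events as words" idea above can be made concrete with a toy tokenizer. The event names and vocabulary scheme here are assumptions for illustration, not the paper's actual preprocessing pipeline.

```python
# Toy illustration of treating in-game events like words in a sentence:
# each distinct event type gets a token id, repeats reuse the same id.
events = [("jump", 1.2), ("attack", 1.5), ("jump", 2.0)]  # (event, timestamp)

vocab = {}
tokens = [vocab.setdefault(name, len(vocab)) for name, _ in events]
print(tokens)  # → [0, 1, 0]
```

A long-range Transformer can then consume such token sequences exactly as it would a sentence.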
arXiv Detail & Related papers (2024-04-05T17:29:47Z) - Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z) - Offline Imitation of Badminton Player Behavior via Experiential Contexts and Brownian Motion [19.215240805688836]
RallyNet is a hierarchical offline imitation learning model for badminton player behaviors.
We extensively validate RallyNet with the largest available real-world badminton dataset.
Results reveal RallyNet's superiority over offline imitation learning methods and state-of-the-art turn-based approaches.
arXiv Detail & Related papers (2024-03-19T03:34:23Z) - Reward Shaping for Improved Learning in Real-time Strategy Game Play [0.3347089492811693]
We show that appropriately designed reward shaping functions can significantly improve the player's performance.
We have validated our reward shaping functions within a simulated environment for playing a marine capture-the-flag game.
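One standard way to build shaping functions of the kind the summary mentions is potential-based reward shaping. The potential below (distance to a flag at position 10) is a made-up example, not the paper's actual shaping function.

```python
# Hedged sketch of potential-based reward shaping for a capture-the-flag-like
# task: F(s, s') = gamma * phi(s') - phi(s) preserves the optimal policy.
GAMMA = 0.99  # discount factor

def potential(state):
    # Assumed potential: closer to the flag (position 10) is better.
    return -abs(state - 10)

def shaped_reward(env_reward, state, next_state):
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Moving from position 5 to 6 (toward the flag) earns a positive bonus.
print(shaped_reward(0.0, 5, 6))
```

Because the shaping term telescopes along any trajectory, it speeds learning without changing which policy is optimal.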
arXiv Detail & Related papers (2023-11-27T21:56:18Z) - All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategy of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z) - Generating Personas for Games with Multimodal Adversarial Imitation Learning [47.70823327747952]
Reinforcement learning has been widely successful in producing agents capable of playing games at a human level.
Going beyond reinforcement learning is necessary to model a wide range of human playstyles.
This paper presents a novel imitation learning approach to generate multiple persona policies for playtesting.
arXiv Detail & Related papers (2023-08-15T06:58:19Z) - Beyond the Meta: Leveraging Game Design Parameters for Patch-Agnostic Esport Analytics [4.1692797498685685]
Esport games comprise a sizeable fraction of the global games market and are its fastest-growing segment.
Compared to traditional sports, esport titles change rapidly, in terms of mechanics as well as rules.
This paper extracts information from game design (i.e. patch notes) and uses clustering techniques to propose a new form of character representation.
arXiv Detail & Related papers (2023-05-29T11:05:20Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
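The optimistic gradient descent (OGD) dynamics the summary refers to can be demonstrated numerically on a toy game. The step size and the bilinear game below are illustrative choices, not the setting analyzed in the paper.

```python
# Sketch of optimistic gradient descent on the zero-sum bilinear game
# f(x, y) = x * y (x minimizes, y maximizes). Plain simultaneous gradient
# play cycles around the equilibrium; the optimistic step, which
# extrapolates the last gradient, makes the last iterate converge to (0, 0).
eta = 0.3
x, y = 1.0, 1.0
gx_prev, gy_prev = y, -x  # previous gradients (initialized to current)
for _ in range(200):
    gx, gy = y, -x                  # gradients for min-player x, max-player y
    x -= eta * (2 * gx - gx_prev)   # optimistic step: 2 * g_t - g_{t-1}
    y -= eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print(abs(x) < 1e-3 and abs(y) < 1e-3)  # last iterate near the equilibrium
```

Replacing the optimistic term `2 * gx - gx_prev` with the plain gradient `gx` makes the iterates spiral outward instead, which is what motivates optimism in time-varying and zero-sum settings.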
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Where Will Players Move Next? Dynamic Graphs and Hierarchical Fusion for Movement Forecasting in Badminton [6.2405734957622245]
We focus on predicting what types of returning strokes will be made, and where players will move to based on previous strokes.
Existing sequence-based models neglect the effects of interactions between players, and graph-based models still suffer from multifaceted perspectives.
We propose a novel Dynamic Graphs and Hierarchical Fusion for Movement Forecasting model (DyMF) with interaction style extractors.
arXiv Detail & Related papers (2022-11-22T12:21:24Z) - Off-Beat Multi-Agent Reinforcement Learning [62.833358249873704]
We investigate model-free multi-agent reinforcement learning (MARL) in environments where off-beat actions are prevalent.
We propose a novel episodic memory, LeGEM, for model-free MARL algorithms.
We evaluate LeGEM on various multi-agent scenarios with off-beat actions, including Stag-Hunter Game, Quarry Game, Afforestation Game, and StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2022-05-27T02:21:04Z) - TiKick: Toward Playing Multi-agent Football Full Games from Single-agent Demonstrations [31.596018856092513]
To the best of our knowledge, TiKick is the first learning-based AI system that can take over the multi-agent Google Research Football full game.
arXiv Detail & Related papers (2021-10-09T08:34:58Z) - Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity [49.68758494467258]
We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games.
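An interaction graph of the kind described above can be sketched as a simple adjacency structure. The agent names and graph here are assumptions for illustration; the paper's population-level constructions are more elaborate.

```python
# Toy interaction graph controlling which agents face each other during
# training: edges are the only matchups the training loop is allowed to run.
interaction_graph = {
    "agent_a": ["agent_b"],             # a only ever trains against b
    "agent_b": ["agent_a", "agent_c"],  # b sees both a and c
    "agent_c": ["agent_b"],
}

def training_matchups(graph):
    """Enumerate the ordered (learner, opponent) pairs the graph permits."""
    return [(agent, opp) for agent, opps in graph.items() for opp in opps]

print(len(training_matchups(interaction_graph)))  # → 4
```

Sparsifying or reshaping this graph changes who learns from whom, which is the lever the paper uses to control population diversity.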
arXiv Detail & Related papers (2021-10-08T11:29:52Z) - Predicting the outcome of team movements -- Player time series analysis using fuzzy and deep methods for representation learning [0.0]
We provide a framework for the useful encoding of short tactics and space occupations in a more extended sequence of movements or tactical plans.
We discuss the effectiveness of the proposed approach for prediction and recognition tasks on the professional basketball SportVU dataset for the 2015-16 half-season.
arXiv Detail & Related papers (2021-09-13T18:42:37Z) - Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z) - baller2vec: A Multi-Entity Transformer For Multi-Agent Spatiotemporal Modeling [17.352818121007576]
Multi-agent spatiotemporal modeling is a challenging task from both an algorithmic design perspective and a computational perspective.
We introduce baller2vec, a multi-entity generalization of the standard Transformer that can simultaneously integrate information across entities and time.
We test the effectiveness of baller2vec for multi-agent spatiotemporal modeling by training it to perform two different basketball-related tasks.
arXiv Detail & Related papers (2021-02-05T17:02:04Z) - Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include algorithm's regret guarantees that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z) - Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.