A Data-Driven Discretized CS:GO Simulation Environment to Facilitate Strategic Multi-Agent Planning Research
- URL: http://arxiv.org/abs/2509.06355v2
- Date: Fri, 19 Sep 2025 22:37:17 GMT
- Title: A Data-Driven Discretized CS:GO Simulation Environment to Facilitate Strategic Multi-Agent Planning Research
- Authors: Yunzhe Wang, Volkan Ustun, Chris McGroarty
- Abstract summary: We present DECOY, a novel multi-agent simulator that abstracts strategic, long-horizon planning in 3D terrains into high-level discretized simulation. Using Counter-Strike: Global Offensive as a testbed, our framework accurately simulates gameplay using only movement decisions for tactical positioning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern simulation environments for complex multi-agent interactions must balance high-fidelity detail with computational efficiency. We present DECOY, a novel multi-agent simulator that abstracts strategic, long-horizon planning in 3D terrains into high-level discretized simulation while preserving low-level environmental fidelity. Using Counter-Strike: Global Offensive (CS:GO) as a testbed, our framework accurately simulates gameplay using only movement decisions for tactical positioning -- without explicitly modeling low-level mechanics such as aiming and shooting. Central to our approach is a waypoint system that simplifies and discretizes continuous states and actions, paired with neural predictive and generative models trained on real CS:GO tournament data to reconstruct event outcomes. Extensive evaluations show that replays generated from human data in DECOY closely match those observed in the original game. Our publicly available simulation environment provides a valuable tool for advancing research in strategic multi-agent planning and behavior generation.
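The waypoint abstraction described in the abstract can be sketched as a minimal discretized environment: agents occupy nodes of a waypoint graph and act by choosing an adjacent waypoint each step, while event outcomes would be resolved by separate learned models. All names below (`WaypointEnv`, agent ids, the toy map) are illustrative assumptions, not DECOY's actual API.

```python
# Minimal sketch of a waypoint-discretized multi-agent environment,
# loosely following the abstraction described in the abstract.
# Class and method names are hypothetical; DECOY's real interface may differ.
from dataclasses import dataclass, field

@dataclass
class WaypointEnv:
    # Adjacency list over discrete waypoints (a graph abstracting the 3D map).
    graph: dict[int, list[int]]
    # Current waypoint of each agent, keyed by agent id.
    positions: dict[str, int] = field(default_factory=dict)

    def legal_actions(self, agent: str) -> list[int]:
        """An agent may stay put or move to an adjacent waypoint."""
        here = self.positions[agent]
        return [here] + self.graph.get(here, [])

    def step(self, moves: dict[str, int]) -> dict[str, int]:
        """Apply one joint movement decision; illegal moves are ignored."""
        for agent, target in moves.items():
            if target in self.legal_actions(agent):
                self.positions[agent] = target
        return dict(self.positions)

# Tiny 4-waypoint corridor: 0 - 1 - 2 - 3.
env = WaypointEnv(
    graph={0: [1], 1: [0, 2], 2: [1, 3], 3: [2]},
    positions={"ct_1": 0, "t_1": 3},
)
state = env.step({"ct_1": 1, "t_1": 2})  # both agents advance one node
state = env.step({"ct_1": 3, "t_1": 2})  # ct_1's jump to 3 is illegal, ignored
```

In the paper's full framework, neural predictive and generative models trained on tournament data would sit on top of such a transition function to reconstruct event outcomes such as engagements; this sketch covers only the movement discretization.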
Related papers
- SimScale: Learning to Drive via Real-World Simulation at Scale [45.08991279559151]
We introduce a novel and scalable simulation framework capable of synthesizing massive unseen states upon existing driving logs. Our pipeline utilizes advanced neural rendering with a reactive environment to generate high-fidelity multi-view observations. We develop a pseudo-expert trajectory generation mechanism for these newly simulated states to provide action supervision.
arXiv Detail & Related papers (2025-11-28T17:17:38Z)
- Autonomous Vehicle Path Planning by Searching With Differentiable Simulation [55.46735086899153]
Planning allows an agent to safely refine its actions before executing them in the real world. In autonomous driving, this is crucial to avoid collisions and navigate in complex, dense traffic scenarios. Here we propose Differentiable Simulation for Search (DSS), a framework that leverages the differentiable simulator Waymax as both a next-state predictor and a critic.
arXiv Detail & Related papers (2025-11-14T07:56:34Z)
- Whenever, Wherever: Towards Orchestrating Crowd Simulations with Spatio-Temporal Spawn Dynamics [65.72663487116439]
We propose nTPP-GMM, which models spatio-temporal spawn dynamics using Neural Temporal Point Processes. We evaluate our approach through simulations of three diverse real-world datasets with nTPP-GMM.
arXiv Detail & Related papers (2025-03-20T18:46:41Z)
- A Simulation System Towards Solving Societal-Scale Manipulation [14.799498804818333]
The rise of AI-driven manipulation poses significant risks to societal trust and democratic processes.
Yet, studying these effects in real-world settings at scale is ethically and logistically impractical.
We present a simulation environment designed to address this.
arXiv Detail & Related papers (2024-10-17T03:16:24Z)
- Sim-to-Real Transfer of Deep Reinforcement Learning Agents for Online Coverage Path Planning [22.077058792635313]
Coverage path planning is the problem of finding a path that covers the entire free space of a confined area. We investigate the suitability of continuous-space reinforcement learning for this challenging problem. We show that our approach surpasses the performance of both previous RL-based approaches and highly specialized methods.
arXiv Detail & Related papers (2024-06-07T13:24:19Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Dyna-T: Dyna-Q and Upper Confidence Bounds Applied to Trees [0.9137554315375919]
We present a preliminary investigation of a novel algorithm called Dyna-T.
In reinforcement learning (RL), a planning agent has its own representation of the environment as a model.
Experience can be used to learn a better model or to directly improve the value function and policy.
arXiv Detail & Related papers (2022-01-12T15:06:30Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.