Continuous Locomotive Crowd Behavior Generation
- URL: http://arxiv.org/abs/2504.04756v2
- Date: Mon, 21 Apr 2025 11:42:20 GMT
- Title: Continuous Locomotive Crowd Behavior Generation
- Authors: Inhwan Bae, Junoh Lee, Hae-Gon Jeon
- Abstract summary: We introduce a novel method for automatically generating continuous, realistic crowd trajectories with heterogeneous behaviors and interactions. We demonstrate that our approach effectively models diverse crowd behavior patterns and generalizes well across different geographical environments.
- Score: 23.45902601618188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling and reproducing crowd behaviors are important in various domains including psychology, robotics, transport engineering and virtual environments. Conventional methods have focused on synthesizing momentary scenes, which have difficulty in replicating the continuous nature of real-world crowds. In this paper, we introduce a novel method for automatically generating continuous, realistic crowd trajectories with heterogeneous behaviors and interactions among individuals. We first design a crowd emitter model. To do this, we obtain spatial layouts from single input images, including a segmentation map, appearance map, population density map and population probability, prior to crowd generation. The emitter then continually places individuals on the timeline by assigning independent behavior characteristics such as agents' type, pace, and start/end positions using diffusion models. Next, our crowd simulator produces their long-term locomotions. To simulate diverse actions, it can augment their behaviors based on a Markov chain. As a result, our overall framework populates the scenes with heterogeneous crowd behaviors by alternating between the proposed emitter and simulator. Note that all the components in the proposed framework are user-controllable. Lastly, we propose a benchmark protocol to evaluate the realism and quality of the generated crowds in terms of the scene-level population dynamics and the individual-level trajectory accuracy. We demonstrate that our approach effectively models diverse crowd behavior patterns and generalizes well across different geographical environments. Code is publicly available at https://github.com/InhwanBae/CrowdES .
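The abstract describes an alternation between a crowd emitter (which places agents on the timeline with individual traits) and a crowd simulator (which advances them, augmenting behaviors via a Markov chain). The following is a minimal, hypothetical sketch of that loop; all names, transition probabilities, and the random draws standing in for the paper's diffusion-based emitter are illustrative assumptions, not the authors' implementation.

```python
import random

BEHAVIORS = ["walk", "run", "stand"]
# Illustrative Markov-chain transition probabilities between behavior states.
TRANSITIONS = {
    "walk": {"walk": 0.8, "run": 0.1, "stand": 0.1},
    "run": {"run": 0.7, "walk": 0.3, "stand": 0.0},
    "stand": {"stand": 0.6, "walk": 0.4, "run": 0.0},
}

def emit_agents(t, rng):
    """Emitter phase: place new agents on the timeline with independent
    traits. (The paper conditions a diffusion model on scene layouts; a
    plain random draw stands in for that step here.)"""
    agents = []
    for _ in range(rng.randint(0, 2)):
        agents.append({
            "spawn_time": t,
            "behavior": rng.choice(BEHAVIORS),
            "pos": (rng.random(), rng.random()),
            "goal": (rng.random(), rng.random()),
        })
    return agents

def step_agent(agent, rng):
    """Simulator phase: augment the agent's behavior via the Markov chain,
    then move it toward its goal at a behavior-dependent pace."""
    states, weights = zip(*TRANSITIONS[agent["behavior"]].items())
    agent["behavior"] = rng.choices(states, weights=weights)[0]
    pace = {"walk": 0.05, "run": 0.1, "stand": 0.0}[agent["behavior"]]
    x, y = agent["pos"]
    gx, gy = agent["goal"]
    dx, dy = gx - x, gy - y
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-8)
    agent["pos"] = (x + pace * dx / norm, y + pace * dy / norm)

def simulate(steps=10, seed=0):
    """Populate a scene by alternating between emitter and simulator."""
    rng = random.Random(seed)
    crowd = []
    for t in range(steps):
        crowd.extend(emit_agents(t, rng))  # emitter places new agents
        for agent in crowd:
            step_agent(agent, rng)         # simulator advances everyone
    return crowd
```

The key structural point mirrored from the abstract is the outer loop: emission and simulation interleave over the timeline, so the population grows continuously rather than being synthesized as a momentary scene.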
Related papers
- Gen-C: Populating Virtual Worlds with Generative Crowds [1.5293427903448022]
We introduce Gen-C, a generative model to automate the task of authoring high-level crowd behaviors. Gen-C bypasses the labor-intensive and challenging task of collecting and annotating real crowd video data. We demonstrate the effectiveness of our approach in two scenarios, a University Campus and a Train Station.
arXiv Detail & Related papers (2025-04-02T17:33:53Z)
- Whenever, Wherever: Towards Orchestrating Crowd Simulations with Spatio-Temporal Spawn Dynamics [65.72663487116439]
We propose nTPP-GMM, which models spatio-temporal spawn dynamics using Neural Temporal Point Processes. We evaluate our approach through simulations of three diverse real-world datasets with nTPP-GMM.
arXiv Detail & Related papers (2025-03-20T18:46:41Z)
- Social-Transmotion: Promptable Human Trajectory Prediction [65.80068316170613]
Social-Transmotion is a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior. Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY.
arXiv Detail & Related papers (2023-12-26T18:56:49Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
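InterControl's summary centers on constraining the distance between joint pairs. A minimal sketch of such a constraint is a penalty on the deviation of the pairwise joint distance from a desired value; the function name and the L2 form of the penalty are assumptions for illustration, not the paper's actual loss.

```python
def pair_distance_loss(joint_a, joint_b, desired):
    """Penalize deviation of the Euclidean distance between two 3D joints
    from a desired value (squared residual on the distance)."""
    dist = sum((a - b) ** 2 for a, b in zip(joint_a, joint_b)) ** 0.5
    return (dist - desired) ** 2

# Example: wrists 0.5 m apart while the target separation is 0.3 m.
loss = pair_distance_loss((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 0.3)
```

In a generation pipeline, a term like this would be summed over the joint pairs whose target distances are specified (per the summary, possibly by an off-the-shelf LLM) and used to steer the synthesized motion.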
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- Spatiotemporal-Augmented Graph Neural Networks for Human Mobility Simulation [35.89805766554052]
We propose a novel framework to model the dynamic spatiotemporal effects of locations, namely SpatioTemporal-Augmented gRaph neural networks (STAR).
The STAR framework designs various spatiotemporal graphs to capture the behavioral correspondence and builds a novel branch to simulate the varying dwell times in locations, whose durations are finally optimized in an adversarial manner.
arXiv Detail & Related papers (2023-06-15T11:47:45Z)
- Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z)
- JKOnet: Proximal Optimal Transport Modeling of Population Dynamics [69.89192135800143]
We propose a neural architecture that combines an energy model on measures with (small) optimal displacements solved with input convex neural networks (ICNNs).
We demonstrate the applicability of our model to explain and predict population dynamics.
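The JKOnet entry leans on input convex neural networks (ICNNs), whose defining trick is that nonnegative weights on the hidden-to-hidden path keep the network convex in its input. The following is a hedged sketch of a two-layer ICNN forward pass with illustrative sizes; it is not the paper's architecture, only a minimal demonstration of the convexity construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 2, 8  # input and hidden sizes (illustrative)

W0 = rng.standard_normal((h, d))          # first layer: unconstrained
b0 = rng.standard_normal(h)
Wz = np.abs(rng.standard_normal((h, h)))  # nonnegative: preserves convexity
Wx = rng.standard_normal((h, d))          # skip connection from the input
b1 = rng.standard_normal(h)
w_out = np.abs(rng.standard_normal(h))    # nonnegative output weights

def icnn(x):
    """f(x) convex in x: ReLU of an affine map is convex, and nonnegative
    combinations of convex functions (Wz, w_out) stay convex."""
    z1 = np.maximum(W0 @ x + b0, 0.0)
    z2 = np.maximum(Wz @ z1 + Wx @ x + b1, 0.0)
    return w_out @ z2
```

Because `f` is convex, its gradient defines a valid optimal-transport-style displacement map, which is the role ICNNs play in the entry above.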
arXiv Detail & Related papers (2021-06-11T12:30:43Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Graph2Kernel Grid-LSTM: A Multi-Cued Model for Pedestrian Trajectory Prediction by Learning Adaptive Neighborhoods [10.57164270098353]
We present a new perspective to interaction modeling by proposing that pedestrian neighborhoods can become adaptive in design.
Our model outperforms state-of-the-art approaches that collate resembling features, evaluated over several publicly tested surveillance videos.
arXiv Detail & Related papers (2020-07-03T19:05:48Z)
- CoMoGCN: Coherent Motion Aware Trajectory Prediction with Graph Representation [12.580809204729583]
We propose a novel framework, coherent motion aware graph convolutional network (CoMoGCN), for trajectory prediction in crowded scenes with group constraints.
Our method achieves state-of-the-art performance on several different trajectory prediction benchmarks, and the best average performance among all benchmarks considered.
arXiv Detail & Related papers (2020-05-02T09:10:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.