Visual-information-driven model for crowd simulation using temporal convolutional network
- URL: http://arxiv.org/abs/2311.02996v2
- Date: Tue, 9 Apr 2024 09:22:30 GMT
- Title: Visual-information-driven model for crowd simulation using temporal convolutional network
- Authors: Xuanwen Liang, Eric Wai Ming Lee
- Abstract summary: This paper proposes a novel visual-information-driven (VID) crowd simulation model.
The VID model predicts the pedestrian velocity at the next time step based on the prior social-visual information and motion data of an individual.
A radar-geometry-locomotion method is established to extract the visual information of pedestrians.
A temporal convolutional network (TCN)-based deep learning model, named social-visual TCN, is developed for velocity prediction.
- Score: 1.712689361909955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowd simulations play a pivotal role in building design, influencing both user experience and public safety. While traditional knowledge-driven models have their merits, data-driven crowd simulation models promise to bring a new dimension of realism to these simulations. However, most of the existing data-driven models are designed for specific geometries, leading to poor adaptability and applicability. A promising strategy for enhancing the adaptability and realism of data-driven crowd simulation models is to incorporate visual information, including the scenario geometry and pedestrian locomotion. Consequently, this paper proposes a novel visual-information-driven (VID) crowd simulation model. The VID model predicts the pedestrian velocity at the next time step based on the prior social-visual information and motion data of an individual. A radar-geometry-locomotion method is established to extract the visual information of pedestrians. Moreover, a temporal convolutional network (TCN)-based deep learning model, named social-visual TCN, is developed for velocity prediction. The VID model is tested on three public pedestrian motion datasets with distinct geometries, i.e., corridor, corner, and T-junction. Both qualitative and quantitative metrics are employed to evaluate the VID model, and the results highlight the improved adaptability of the model across all three geometric scenarios. Overall, the proposed method demonstrates effectiveness in enhancing the adaptability of data-driven crowd models.
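The core prediction step described above, feeding a window of prior social-visual and motion features through a temporal convolutional network to predict the next-step velocity, can be illustrated with a minimal sketch. The layer sizes, feature dimensions, and single linear head below are illustrative assumptions, not the paper's actual social-visual TCN architecture; only the causal (left-padded, dilated) convolution structure, the defining property of a TCN, is taken from the abstract.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: the output at step t sees only inputs at steps <= t.

    x: (T, C_in) input sequence; w: (K, C_in, C_out) kernel. Returns (T, C_out).
    """
    K = w.shape[0]
    pad = (K - 1) * dilation
    x_p = np.pad(x, ((pad, 0), (0, 0)))  # left-pad so no future information leaks in
    T, c_out = x.shape[0], w.shape[2]
    y = np.zeros((T, c_out))
    for t in range(T):
        for k in range(K):
            # tap k reaches back k * dilation steps from the current time t
            y[t] += x_p[t + pad - k * dilation] @ w[K - 1 - k]
    return y

rng = np.random.default_rng(0)
T, C = 8, 6                                   # 8 time steps, 6 illustrative social-visual features
x = rng.standard_normal((T, C))               # stand-in for extracted visual + motion data
w1 = rng.standard_normal((3, C, 16)) * 0.1    # kernel size 3, dilation 1
w2 = rng.standard_normal((3, 16, 16)) * 0.1   # dilation 2 widens the receptive field
head = rng.standard_normal((16, 2)) * 0.1     # linear head -> (vx, vy)

h = np.maximum(causal_conv1d(x, w1, dilation=1), 0.0)  # ReLU
h = np.maximum(causal_conv1d(h, w2, dilation=2), 0.0)
v_next = h[-1] @ head   # predicted pedestrian velocity at the next time step
```

Stacking layers with increasing dilation is what lets a TCN cover a long history window with few layers, which is why it suits velocity prediction from a sequence of prior observations.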
Related papers
- A Data-driven Crowd Simulation Framework Integrating Physics-informed Machine Learning with Navigation Potential Fields [15.429885272765363]
We propose a novel data-driven crowd simulation framework that integrates Physics-informed Machine Learning (PIML) with navigation potential fields.
Specifically, we design an innovative Physics-informed Spatiotemporal Graph Convolutional Network (PI-STGCN) as a data-driven module to predict pedestrian movement trends.
In our framework, navigation potential fields are dynamically computed and updated based on the movement trends predicted by the PI-STGCN.
arXiv Detail & Related papers (2024-10-21T15:56:17Z) - Spatiotemporal Implicit Neural Representation as a Generalized Traffic Data Learner [46.866240648471894]
Spatiotemporal Traffic Data (STTD) measures the complex dynamical behaviors of the multiscale transportation system.
We present a novel paradigm to address the STTD learning problem by parameterizing STTD as an implicit neural representation.
We validate its effectiveness through extensive experiments in real-world scenarios, showcasing applications from corridor to network scales.
arXiv Detail & Related papers (2024-05-06T06:23:06Z) - Bridging the Sim-to-Real Gap with Bayesian Inference [53.61496586090384]
We present SIM-FSVGD for learning robot dynamics from data.
We use low-fidelity physical priors to regularize the training of neural network models.
We demonstrate the effectiveness of SIM-FSVGD in bridging the sim-to-real gap on a high-performance RC racecar system.
arXiv Detail & Related papers (2024-03-25T11:29:32Z) - Trajeglish: Traffic Modeling as Next-Token Prediction [67.28197954427638]
A longstanding challenge for self-driving development is simulating dynamic driving scenarios seeded from recorded driving logs.
We apply tools from discrete sequence modeling to model how vehicles, pedestrians and cyclists interact in driving scenarios.
Our model tops the Sim Agents Benchmark, surpassing prior work on the realism meta-metric by 3.3% and on the interaction metric by 9.9%.
arXiv Detail & Related papers (2023-12-07T18:53:27Z) - Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z) - Reinforcement Learning with Human Feedback for Realistic Traffic Simulation [53.85002640149283]
A key element of effective simulation is the incorporation of realistic traffic models that align with human knowledge.
This study identifies two main challenges: capturing the nuances of human preferences on realism and the unification of diverse traffic simulation models.
arXiv Detail & Related papers (2023-09-01T19:29:53Z) - TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z) - STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance with far fewer trainable parameters and high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z) - Imagining The Road Ahead: Multi-Agent Trajectory Prediction via Differentiable Simulation [17.953880589741438]
We develop a deep generative model built on a fully differentiable simulator for trajectory prediction.
We achieve state-of-the-art results on the INTERACTION dataset, using standard neural architectures and a standard variational training objective.
We name our model ITRA, for "Imagining the Road Ahead".
arXiv Detail & Related papers (2021-04-22T17:48:08Z) - Pedestrian Trajectory Prediction with Convolutional Neural Networks [0.3787359747190393]
We propose a new approach to pedestrian trajectory prediction, with the introduction of a novel 2D convolutional model.
This new model outperforms recurrent models and achieves state-of-the-art results on the ETH and TrajNet datasets.
We also present an effective system to represent pedestrian positions and powerful data augmentation techniques.
arXiv Detail & Related papers (2020-10-12T15:51:01Z) - Wind speed prediction using multidimensional convolutional neural networks [5.228711636020665]
This paper introduces a model based on convolutional neural networks (CNNs) for wind speed prediction tasks.
We show that compared to classical CNN-based models, the proposed model is able to better characterise the wind data.
arXiv Detail & Related papers (2020-07-04T20:48:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.