Development of a Realistic Crowd Simulation Environment for Fine-grained
Validation of People Tracking Methods
- URL: http://arxiv.org/abs/2304.13403v1
- Date: Wed, 26 Apr 2023 09:29:58 GMT
- Authors: Paweł Foszner, Agnieszka Szczęsna, Luca Ciampi, Nicola Messina,
Adam Cygan, Bartosz Bizoń, Michał Cogiel, Dominik Golba, Elżbieta
Macioszek, Michał Staniszewski
- Abstract summary: This work develops an extension of crowd simulation (named CrowdSim2) and proves its usability for people-tracking algorithms.
The simulator is developed using the very popular Unity 3D engine, with particular emphasis on the realism of the environment.
Three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
- Score: 0.7223361655030193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generally, crowd datasets can be collected or generated from real or
synthetic sources. Real data is generated by using infrastructure-based sensors
(such as static cameras or other sensors). The use of simulation tools can
significantly reduce the time required to generate scenario-specific crowd
datasets, facilitate data-driven research, and support the subsequent
development of functional machine learning models. The main goal of this work was to develop an extension of
crowd simulation (named CrowdSim2) and prove its usability in the application
of people-tracking algorithms. The simulator is developed using the very
popular Unity 3D engine with particular emphasis on the aspects of realism in
the environment, weather conditions, traffic, and the movement and models of
individual agents. Finally, three tracking methods were used to validate the
generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
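The simplest of the three validation trackers, IOU-Tracker, associates detections across frames purely by bounding-box overlap. Below is a minimal sketch of that association step, assuming axis-aligned boxes in (x1, y1, x2, y2) format; the greedy matching strategy and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(tracks, detections, threshold=0.5):
    """Greedily extend each track with its highest-IoU unmatched detection.

    tracks: dict mapping track id -> last known box
    detections: list of boxes in the current frame
    Returns a list of (track_id, detection_index) matches.
    """
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best, best_iou = None, threshold
        for d_id, d_box in enumerate(detections):
            if d_id in used:
                continue
            score = iou(t_box, d_box)
            if score >= best_iou:
                best, best_iou = d_id, score
        if best is not None:
            used.add(best)
            matches.append((t_id, best))
    return matches
```

Deep-Sort and Deep-TAMA extend this idea with appearance features and motion models, which is why they are typically more robust to occlusions in dense crowds.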
Related papers
- CARLA2Real: a tool for reducing the sim2real gap in CARLA simulator [2.8978140690127328]
We employ a state-of-the-art approach to enhance the photorealism of simulated data, aligning them with the visual characteristics of real-world datasets.
Based on this, we developed CARLA2Real, an easy-to-use, publicly available tool (plug-in) for the widely used and open-source CARLA simulator.
This tool enhances the output of CARLA in near real-time, achieving a frame rate of 13 FPS, translating it to the visual style and realism of real-world datasets.
arXiv Detail & Related papers (2024-10-23T19:33:30Z)
- A Unified Simulation Framework for Visual and Behavioral Fidelity in Crowd Analysis [6.460475042590685]
We present a human crowd simulator, called UniCrowd, and its associated validation pipeline.
We show how the simulator can generate annotated data suitable for computer vision tasks, in particular detection and segmentation, as well as related applications such as crowd counting, human pose estimation, trajectory analysis and prediction, and anomaly detection.
arXiv Detail & Related papers (2023-12-05T09:43:27Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Learning from synthetic data generated with GRADE [0.6982738885923204]
We present a framework for generating realistic animated dynamic environments (GRADE) for robotics research.
GRADE supports full simulation control, ROS integration, and realistic physics, while running in an engine that produces high-visual-fidelity images and ground truth data.
We show that models trained using only synthetic data can generalize well to real-world images in the same application domain.
arXiv Detail & Related papers (2023-05-07T14:13:04Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Hands-Up: Leveraging Synthetic Data for Hands-On-Wheel Detection [0.38233569758620045]
This work demonstrates the use of synthetic photo-realistic in-cabin data to train a Driver Monitoring System.
We show how performing error analysis and generating the missing edge-cases in our platform boosts performance.
This showcases the ability of human-centric synthetic data to generalize well to the real world.
arXiv Detail & Related papers (2022-05-31T23:34:12Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- PREPRINT: Comparison of deep learning and hand crafted features for mining simulation data [7.214140640112874]
This paper addresses the task of extracting meaningful results in an automated manner from high dimensional data sets.
We propose deep learning methods which are capable of processing such data and which can be trained to solve relevant tasks on simulation data.
We compile a large dataset of 2D simulations of the flow field around airfoils, containing 16,000 flow fields, which we used to test and compare the approaches.
arXiv Detail & Related papers (2021-03-11T09:28:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.