R3ST: A Synthetic 3D Dataset With Realistic Trajectories
- URL: http://arxiv.org/abs/2512.16784v1
- Date: Thu, 18 Dec 2025 17:18:45 GMT
- Title: R3ST: A Synthetic 3D Dataset With Realistic Trajectories
- Authors: Simone Teglia, Claudia Melis Tonti, Francesco Pro, Leonardo Russo, Andrea Alfarano, Leonardo Pentassuglia, Irene Amerini
- Abstract summary: We introduce R3ST (Realistic 3D Synthetic Trajectories), a synthetic dataset that overcomes the lack of realistic vehicle motion. The proposed dataset closes the gap between synthetic data and realistic trajectories.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Datasets are essential to train and evaluate computer vision models used for traffic analysis and to enhance road safety. Existing real datasets fit real-world scenarios, capturing authentic road object behaviors; however, they typically lack precise ground-truth annotations. In contrast, synthetic datasets play a crucial role, allowing a large number of frames to be annotated without additional cost or time. However, a general drawback of synthetic datasets is the lack of realistic vehicle motion, since trajectories are generated by AI models or rule-based systems. In this work, we introduce R3ST (Realistic 3D Synthetic Trajectories), a synthetic dataset that overcomes this limitation by generating a synthetic 3D environment and integrating real-world trajectories derived from SinD, a bird's-eye-view dataset recorded from drone footage. The proposed dataset closes the gap between synthetic data and realistic trajectories, advancing research in trajectory forecasting of road vehicles by offering both accurate multimodal ground-truth annotations and authentic human-driven vehicle trajectories.
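The core idea of the abstract, placing real bird's-eye-view (BEV) trajectories such as those from SinD onto the ground plane of a synthetic 3D scene, can be sketched as a simple coordinate mapping. This is a minimal, illustrative assumption of how such an integration might look; the function names, the affine calibration, and the flat-ground assumption are hypothetical and are not taken from the R3ST pipeline.

```python
# Hypothetical sketch: mapping metric BEV trajectory samples (as a
# drone-recorded dataset like SinD might provide) onto the ground
# plane of a synthetic 3D scene. The scale/offset calibration and the
# flat-ground (z = const) assumption are illustrative, not R3ST's method.
from dataclasses import dataclass


@dataclass
class BEVPoint:
    x: float  # metres in the rectified bird's-eye-view frame
    y: float  # metres in the rectified bird's-eye-view frame
    t: float  # timestamp in seconds


def bev_to_world(points, scale=1.0, offset=(0.0, 0.0), ground_z=0.0):
    """Place BEV trajectory samples in the synthetic scene's world frame.

    Applies a 2D similarity transform (scale + translation) and assigns a
    constant ground-plane height, returning (x, y, z, t) tuples.
    """
    ox, oy = offset
    return [(p.x * scale + ox, p.y * scale + oy, ground_z, p.t)
            for p in points]


# A two-sample track, shifted into a scene whose origin differs from the BEV frame.
track = [BEVPoint(0.0, 0.0, 0.0), BEVPoint(1.5, 0.2, 0.1)]
world = bev_to_world(track, scale=1.0, offset=(10.0, -5.0))
```

In a real pipeline the 2D transform would come from georeferencing the drone footage against the synthetic map, and z would be sampled from the scene's terrain rather than held constant.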
Related papers
- UrbanTwin: Synthetic LiDAR Datasets (LUMPI, V2X-Real-IC, and TUMTraf-I) [3.1508266388327324]
UrbanTwin datasets are high-fidelity, realistic replicas of three public roadside lidar datasets. Each UrbanTwin dataset contains 10K frames corresponding to one of the public datasets.
arXiv Detail & Related papers (2025-09-08T15:06:02Z)
- Towards Railway Domain Adaptation for LiDAR-based 3D Detection: Road-to-Rail and Sim-to-Real via SynDRA-BBox [3.3810628880631226]
We introduce SynDRA-BBox, a synthetic dataset designed to support object detection and other vision-based tasks in realistic railway scenarios. To the best of our knowledge, it is the first synthetic dataset specifically tailored for 2D and 3D object detection in the railway domain. A state-of-the-art semi-supervised domain adaptation method is adapted to the railway context, enabling the transfer of synthetic data to 3D object detection.
arXiv Detail & Related papers (2025-07-22T10:04:49Z)
- R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation [78.26308457952636]
This paper introduces R3D2, a lightweight, one-step diffusion model designed to overcome limitations in autonomous driving simulation. It enables realistic insertion of complete 3D assets into existing scenes by generating plausible rendering effects, such as shadows and consistent lighting, in real time. We show that R3D2 significantly enhances the realism of inserted assets, enabling use cases like text-to-3D asset insertion and cross-scene/dataset object transfer.
arXiv Detail & Related papers (2025-06-09T14:50:19Z)
- Unraveling the Effects of Synthetic Data on End-to-End Autonomous Driving [35.49042205415498]
We introduce SceneCrafter, a realistic, interactive, and efficient autonomous driving simulator based on 3D Gaussian Splatting (3DGS). SceneCrafter efficiently generates realistic driving logs across diverse traffic scenarios. It also enables robust closed-loop evaluation of end-to-end models.
arXiv Detail & Related papers (2025-03-23T15:27:43Z)
- Drive-1-to-3: Enriching Diffusion Priors for Novel View Synthesis of Real Vehicles [81.29018359825872]
This paper consolidates a set of good practices to finetune large pretrained models for a real-world task. Specifically, we develop several strategies to account for discrepancies between the synthetic data and real driving data. Our insights lead to effective finetuning that results in a 68.8% reduction in FID for novel view synthesis over prior arts.
arXiv Detail & Related papers (2024-12-19T03:39:13Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly display our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Augmented Reality based Simulated Data (ARSim) with multi-view consistency for AV perception networks [47.07188762367792]
We present ARSim, a framework designed to enhance real multi-view image data with 3D synthetic objects of interest.
We construct a simplified virtual scene using real data and strategically place 3D synthetic assets within it.
The resulting augmented multi-view consistent dataset is used to train a multi-camera perception network for autonomous vehicles.
arXiv Detail & Related papers (2024-03-22T17:49:11Z)
- Unsupervised Traffic Scene Generation with Synthetic 3D Scene Graphs [83.9783063609389]
We propose a method based on domain-invariant scene representation to directly synthesize traffic scene imagery without rendering.
Specifically, we rely on synthetic scene graphs as our internal representation and introduce an unsupervised neural network architecture for realistic traffic scene synthesis.
arXiv Detail & Related papers (2023-03-15T09:26:29Z)
- Hands-Up: Leveraging Synthetic Data for Hands-On-Wheel Detection [0.38233569758620045]
This work demonstrates the use of synthetic photo-realistic in-cabin data to train a Driver Monitoring System.
We show how performing error analysis and generating the missing edge-cases in our platform boosts performance.
This showcases the ability of human-centric synthetic data to generalize well to the real world.
arXiv Detail & Related papers (2022-05-31T23:34:12Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)