DriveSceneGen: Generating Diverse and Realistic Driving Scenarios from
Scratch
- URL: http://arxiv.org/abs/2309.14685v2
- Date: Wed, 28 Feb 2024 09:31:22 GMT
- Title: DriveSceneGen: Generating Diverse and Realistic Driving Scenarios from
Scratch
- Authors: Shuo Sun, Zekai Gu, Tianchen Sun, Jiawei Sun, Chengran Yuan, Yuhang
Han, Dongen Li, Marcelo H. Ang Jr
- Abstract summary: This work introduces DriveSceneGen, a data-driven driving scenario generation method that learns from a real-world driving dataset.
DriveSceneGen is able to generate novel driving scenarios that align with real-world data distributions with high fidelity and diversity.
To the best of our knowledge, DriveSceneGen is the first method that generates novel driving scenarios involving both static map elements and dynamic traffic participants from scratch.
- Score: 6.919313701949779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic and diverse traffic scenarios in large quantities are crucial for
the development and validation of autonomous driving systems. However, owing to
numerous difficulties in the data collection process and the reliance on
intensive annotations, real-world datasets lack sufficient quantity and
diversity to support the increasing demand for data. This work introduces
DriveSceneGen, a data-driven driving scenario generation method that learns
from a real-world driving dataset and generates entire dynamic driving
scenarios from scratch. DriveSceneGen is able to generate novel driving
scenarios that align with real-world data distributions with high fidelity and
diversity. Experimental results on 5k generated scenarios highlight the
generation quality, diversity, and scalability compared to real-world datasets.
To the best of our knowledge, DriveSceneGen is the first method that generates
novel driving scenarios involving both static map elements and dynamic traffic
participants from scratch.
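The abstract does not describe the model internals, but the "from scratch" generation of a scenario containing both static map elements and dynamic traffic participants can be illustrated with a minimal, purely conceptual sketch. Everything here is an assumption for illustration only: the toy denoiser, the tensor shape, and the split into map versus agent channels are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t):
    """Hypothetical reverse-diffusion step: a stand-in for a learned
    denoiser that nudges the noisy sample toward the data manifold."""
    return x - 0.1 * x + 0.05 * t * rng.standard_normal(x.shape)

def generate_scenario(shape=(8, 64, 64), steps=10):
    """Conceptual 'from scratch' generation: start from pure noise and
    iteratively denoise into a rasterized scenario tensor whose channels
    (an assumed layout) encode static map elements and dynamic agents."""
    x = rng.standard_normal(shape)
    for t in np.linspace(1.0, 0.0, steps):
        x = denoise_step(x, t)
    static_map = x[:4]   # assumed: lane / boundary channels
    agents = x[4:]       # assumed: agent occupancy / motion channels
    return static_map, agents

static_map, agents = generate_scenario()
```

The point of the sketch is only the overall pattern claimed by the abstract: a single generative pass produces the whole scenario (map and participants together) rather than conditioning on a pre-existing map.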
Related papers
- GenDDS: Generating Diverse Driving Video Scenarios with Prompt-to-Video Generative Model [6.144680854063938]
GenDDS is a novel approach for generating driving scenarios for autonomous driving systems.
We employ the KITTI dataset, which includes real-world driving videos, to train the model.
We demonstrate that our model can generate high-quality driving videos that closely replicate the complexity and variability of real-world driving scenarios.
arXiv Detail & Related papers (2024-08-28T15:37:44Z)
- DriveDiTFit: Fine-tuning Diffusion Transformers for Autonomous Driving [27.92501884414881]
In autonomous driving, datasets are expected to cover various driving scenarios with adverse weather, lighting conditions and diverse moving objects.
We propose DriveDiTFit, a novel method for efficiently generating autonomous driving data by fine-tuning pre-trained Diffusion Transformers (DiTs).
Specifically, DriveDiTFit utilizes a gap-driven modulation technique to carefully select and efficiently fine-tune a few parameters in DiTs according to the discrepancy between the pre-trained source data and the target driving data.
arXiv Detail & Related papers (2024-07-22T14:18:52Z)
- SimGen: Simulator-conditioned Driving Scene Generation [50.03358485083602]
We introduce a simulator-conditioned scene generation framework called SimGen.
SimGen learns to generate diverse driving scenes by mixing data from the simulator and the real world.
It achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator.
arXiv Detail & Related papers (2024-06-13T17:58:32Z)
- DragTraffic: Interactive and Controllable Traffic Scene Generation for Autonomous Driving [10.90477019946728]
DragTraffic is a general, interactive, and controllable traffic scene generation framework based on conditional diffusion.
We employ a regression model to provide a general initial solution and a refinement process based on the conditional diffusion model to ensure diversity.
Experiments on a real-world driving dataset show that DragTraffic outperforms existing methods in terms of authenticity, diversity, and freedom.
arXiv Detail & Related papers (2024-04-19T04:49:28Z)
- GenAD: Generalized Predictive Model for Autonomous Driving [75.39517472462089]
We introduce the first large-scale video prediction model in the autonomous driving discipline.
Our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks.
It can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.
arXiv Detail & Related papers (2024-03-14T17:58:33Z)
- RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios [58.62407014256686]
RealGen is a novel retrieval-based in-context learning framework for traffic scenario generation.
RealGen synthesizes new scenarios by combining behaviors from multiple retrieved examples in a gradient-free way.
This in-context learning framework endows versatile generative capabilities, including the ability to edit scenarios.
arXiv Detail & Related papers (2023-12-19T23:11:06Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- SceneGen: Learning to Generate Realistic Traffic Scenes [92.98412203941912]
We present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and distributions.
We demonstrate SceneGen's ability to faithfully model distributions of real traffic scenes.
arXiv Detail & Related papers (2021-01-16T22:51:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.