Generating Driving Scenes with Diffusion
- URL: http://arxiv.org/abs/2305.18452v1
- Date: Mon, 29 May 2023 04:03:46 GMT
- Title: Generating Driving Scenes with Diffusion
- Authors: Ethan Pronovost, Kai Wang, Nick Roy
- Abstract summary: We use a novel combination of diffusion and object detection to create realistic and physically plausible arrangements of discrete bounding boxes for agents.
We show that our scene generation model is able to adapt to different regions in the US, producing scenarios that capture the intricacies of each region.
- Score: 4.280988599118117
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we describe a learned method of traffic scene generation
designed to simulate the output of the perception system of a self-driving car.
In our "Scene Diffusion" system, inspired by latent diffusion, we use a novel
combination of diffusion and object detection to directly create realistic and
physically plausible arrangements of discrete bounding boxes for agents. We
show that our scene generation model is able to adapt to different regions in
the US, producing scenarios that capture the intricacies of each region.
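To make the generation process concrete, the sketch below shows a generic DDPM-style reverse diffusion loop over a set of per-agent box parameters (x, y, length, width, heading). This is a minimal illustration under assumed names and parameterization: the paper's actual system combines latent diffusion with an object-detection-style decoder, so the `denoiser`, noise schedule, and map conditioning here are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): DDPM-style reverse diffusion over a set
# of agent bounding boxes, each parameterized as (x, y, length, width, heading).
import numpy as np

NUM_STEPS = 1000
betas = np.linspace(1e-4, 0.02, NUM_STEPS)       # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(noisy_boxes, t, map_embedding):
    """Placeholder for the learned network that predicts the noise added at step t."""
    return np.zeros_like(noisy_boxes)            # a real model would be trained on logged scenes

def sample_scene(num_agents, map_embedding, rng):
    """Draw one traffic scene: a set of agent boxes consistent with the (assumed) map conditioning."""
    boxes = rng.standard_normal((num_agents, 5)) # start from pure Gaussian noise
    for t in reversed(range(NUM_STEPS)):
        eps = denoiser(boxes, t, map_embedding)
        # Standard DDPM posterior-mean update.
        boxes = (boxes - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            boxes += np.sqrt(betas[t]) * rng.standard_normal(boxes.shape)
    return boxes                                 # would be denormalized / decoded to final boxes

scene = sample_scene(num_agents=16, map_embedding=None, rng=np.random.default_rng(0))
```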
Related papers
- DragTraffic: Interactive and Controllable Traffic Scene Generation for Autonomous Driving [10.90477019946728]
DragTraffic is a general, interactive, and controllable traffic scene generation framework based on conditional diffusion.
We employ a regression model to provide a general initial solution, followed by a refinement process based on a conditional diffusion model to ensure diversity.
Experiments on a real-world driving dataset show that DragTraffic outperforms existing methods in terms of authenticity, diversity, and freedom.
arXiv Detail & Related papers (2024-04-19T04:49:28Z)
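DragTraffic's summary mentions a regression model for an initial solution followed by conditional-diffusion refinement. The sketch below illustrates one plausible way to chain those two stages; every component (the regressor, the denoiser, the step count) is a hypothetical stand-in rather than the paper's code.

```python
# Illustrative two-stage pipeline (assumed structure, not DragTraffic's code):
# a regressor proposes an initial trajectory, then a short conditional diffusion
# refinement adds diversity around that proposal.
import numpy as np

def regress_initial_trajectory(context):
    """Placeholder regression model: returns a coarse (T, 2) trajectory proposal."""
    return np.zeros((20, 2))

def conditional_denoiser(noisy_traj, t, context, init_traj):
    """Placeholder denoiser conditioned on scene context and the initial solution."""
    return noisy_traj - init_traj                # pretends the clean signal is the proposal itself

def refine_with_diffusion(init_traj, context, steps=50, rng=None):
    """Noise the regression proposal, then denoise it back with the conditional model."""
    if rng is None:
        rng = np.random.default_rng()
    traj = init_traj + rng.standard_normal(init_traj.shape)
    for t in reversed(range(steps)):
        traj = traj - conditional_denoiser(traj, t, context, init_traj) / steps
        if t > 0:
            traj += 0.01 * rng.standard_normal(traj.shape)   # keeps samples diverse
    return traj

context = {"map": None, "agents": None}
trajectory = refine_with_diffusion(regress_initial_trajectory(context), context)
```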
- WcDT: World-centric Diffusion Transformer for Traffic Scene Generation [13.616763172038846]
We introduce a novel approach for autonomous driving trajectory generation by harnessing the complementary strengths of diffusion probabilistic models and transformers.
Our proposed framework, termed the "World-Centric Diffusion Transformer" (WcDT), optimizes the entire trajectory generation process.
Our results show that the proposed approach exhibits superior performance in generating both realistic and diverse trajectories.
arXiv Detail & Related papers (2024-04-02T16:28:41Z)
- Scenario Diffusion: Controllable Driving Scenario Generation With Diffusion [13.570197934493255]
We propose a novel diffusion-based architecture for generating traffic scenarios that enables controllable scenario generation.
We show that our approach has sufficient expressive capacity to model diverse traffic patterns and generalizes to different geographical regions.
arXiv Detail & Related papers (2023-11-05T19:04:25Z)
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach to end-to-end, open-set (any environment/scene) autonomous driving that can provide driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)
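Drive Anywhere's summary highlights driving decisions computed from representations "queryable by image and text". As a rough illustration of what text-querying a spatial feature map could look like, the sketch below scores per-patch features against a text embedding with cosine similarity; the encoder, feature dimensions, and prompt are assumptions, not the paper's architecture.

```python
# Rough sketch (assumed, not the paper's code): querying a spatial feature map
# with a text embedding to highlight regions relevant to a concept.
import numpy as np

def encode_text(prompt):
    """Placeholder text encoder returning a unit-norm embedding."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def query_features(feature_map, prompt):
    """feature_map: (H, W, 256) per-patch embeddings; returns an (H, W) relevance map."""
    text = encode_text(prompt)
    norms = np.linalg.norm(feature_map, axis=-1, keepdims=True) + 1e-8
    return (feature_map / norms) @ text          # cosine similarity per patch

features = np.random.default_rng(0).standard_normal((12, 12, 256))
relevance = query_features(features, "pedestrian crossing")
```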
- A Diffusion-Model of Joint Interactive Navigation [14.689298253430568]
We present DJINN, a diffusion-based method for generating traffic scenarios.
Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future.
We show how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions.
arXiv Detail & Related papers (2023-09-21T22:10:20Z)
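DJINN is summarized as jointly diffusing all agents' trajectories while conditioning on an arbitrary set of observed states. One common way to realize such flexible conditioning is inpainting-style sampling, where observed entries are clamped at every denoising step; the sketch below shows that generic pattern under assumed shapes and a placeholder denoiser, not DJINN's actual algorithm.

```python
# Inpainting-style conditional sampling sketch (an assumption about how flexible
# conditioning could work, not DJINN's code): observed agent states are clamped
# to their known values at every denoising step.
import numpy as np

STEPS = 200

def joint_denoiser(trajs, t):
    """Placeholder network denoising all agents' trajectories jointly: (A, T, 2)."""
    return np.zeros_like(trajs)

def sample_conditional(observed, mask, rng):
    """observed: (A, T, 2) known states; mask: (A, T, 1), 1 where a state is observed."""
    trajs = rng.standard_normal(observed.shape)
    for t in reversed(range(STEPS)):
        trajs = trajs - joint_denoiser(trajs, t) / STEPS
        if t > 0:
            trajs += 0.01 * rng.standard_normal(trajs.shape)
        # Clamp observed past/present/future states so every sample stays consistent with them.
        trajs = mask * observed + (1.0 - mask) * trajs
    return trajs

rng = np.random.default_rng(0)
observed = np.zeros((4, 30, 2))                  # 4 agents, 30 timesteps
mask = np.zeros((4, 30, 1))
mask[:, :10] = 1.0                               # condition on the first 10 steps of every agent
sample = sample_conditional(observed, mask, rng)
```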
- DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving [76.24483706445298]
We introduce DriveDreamer, a world model entirely derived from real-world driving scenarios.
In the initial phase, DriveDreamer acquires a deep understanding of structured traffic constraints, while the subsequent stage equips it with the ability to anticipate future states.
DriveDreamer enables the generation of realistic and reasonable driving policies, opening avenues for interaction and practical applications.
arXiv Detail & Related papers (2023-09-18T13:58:42Z)
- Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion [83.88829943619656]
We introduce a method for generating realistic pedestrian trajectories and full-body animations that can be controlled to meet user-defined goals.
Our guided diffusion model allows users to constrain trajectories through target waypoints, speed, and specified social groups.
We propose utilizing the value function learned during RL training of the animation controller to guide diffusion to produce trajectories better suited for particular scenarios.
arXiv Detail & Related papers (2023-04-04T15:46:42Z)
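The Trace and Pace summary describes guiding diffusion with a value function learned during RL training of the animation controller. The sketch below shows the generic guided-sampling pattern of nudging each denoising step along the gradient of a scalar guide; the toy waypoint objective, the guidance scale, and the denoiser are placeholders, not the paper's procedure.

```python
# Generic guided-diffusion sketch (assumed pattern, not the paper's exact method):
# each denoising step is nudged along the gradient of a scalar guide, which here
# stands in for the RL value function.
import numpy as np

STEPS = 100
WAYPOINT = np.array([10.0, 5.0])

def denoiser(traj, t):
    """Placeholder trajectory denoiser for a (T, 2) pedestrian path."""
    return np.zeros_like(traj)

def guide_gradient(traj):
    """Gradient of a toy scalar guide: reward for ending near a target waypoint."""
    grad = np.zeros_like(traj)
    grad[-1] = -(traj[-1] - WAYPOINT)            # pull the final position toward the waypoint
    return grad

def guided_sample(rng, guidance_scale=0.1):
    traj = rng.standard_normal((40, 2))
    for t in reversed(range(STEPS)):
        traj = traj - denoiser(traj, t) / STEPS              # plain denoising step
        traj = traj + guidance_scale * guide_gradient(traj)  # guidance toward higher value
        if t > 0:
            traj += 0.01 * rng.standard_normal(traj.shape)
    return traj

path = guided_sample(np.random.default_rng(0))
```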
- Learning Continuous Environment Fields via Implicit Functions [144.4913852552954]
We propose a novel scene representation that encodes reaching distance, the distance from any position in the scene to a goal along a feasible trajectory.
We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes.
arXiv Detail & Related papers (2021-11-27T22:36:58Z)
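The continuous environment field is described as an implicit function, so one natural reading is a small MLP that maps a (position, goal) pair to a reaching distance, with an agent steered by descending the field's gradient. The sketch below illustrates that pattern with an untrained stand-in network; none of it is the paper's code.

```python
# Sketch of the implicit-field idea (assumed interface, not the paper's code):
# an MLP maps (position, goal) to a reaching distance, and an agent moves by
# descending the field's gradient with respect to its position.
import torch
import torch.nn as nn

field = nn.Sequential(                           # untrained stand-in for the learned field
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def step_agent(pos, goal, lr=0.1):
    """Move one step downhill on the predicted reaching distance."""
    pos = pos.clone().requires_grad_(True)
    dist = field(torch.cat([pos, goal]))[0]
    dist.backward()
    with torch.no_grad():
        return pos - lr * pos.grad               # follow the negative gradient toward the goal

pos, goal = torch.tensor([0.0, 0.0]), torch.tensor([5.0, 3.0])
for _ in range(20):
    pos = step_agent(pos, goal)
```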
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
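TransFuser is summarized as integrating image and LiDAR representations "using attention". The sketch below shows one generic cross-attention fusion step in which image tokens attend to LiDAR tokens; the dimensions, module choice, and residual wiring are illustrative assumptions, not the actual TransFuser architecture.

```python
# Generic attention-based fusion sketch (illustrative, not TransFuser's code):
# image tokens attend to LiDAR tokens, letting one branch borrow features
# from the other modality.
import torch
import torch.nn as nn

DIM = 128
attn = nn.MultiheadAttention(embed_dim=DIM, num_heads=4, batch_first=True)

def fuse(image_tokens, lidar_tokens):
    """image_tokens: (B, N_img, DIM); lidar_tokens: (B, N_lidar, DIM)."""
    fused, _ = attn(query=image_tokens, key=lidar_tokens, value=lidar_tokens)
    return image_tokens + fused                  # residual connection, transformer-style

image_tokens = torch.randn(2, 64, DIM)           # e.g. an 8x8 grid of image patch features
lidar_tokens = torch.randn(2, 100, DIM)          # e.g. 100 LiDAR pillar features
fused_tokens = fuse(image_tokens, lidar_tokens)
```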
- SceneGen: Learning to Generate Realistic Traffic Scenes [92.98412203941912]
We present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and distributions.
We demonstrate SceneGen's ability to faithfully model distributions of real traffic scenes.
arXiv Detail & Related papers (2021-01-16T22:51:43Z)
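SceneGen is described as autoregressive, so the natural reading is a loop that inserts agents one at a time, each conditioned on the map and the agents placed so far. The sketch below shows that loop with a placeholder proposal model; the stopping rule and state parameterization are assumptions, not SceneGen's design.

```python
# Autoregressive scene-building sketch (assumed structure, not SceneGen's code):
# agents are sampled one at a time, each conditioned on the map and on the
# agents already placed.
import numpy as np

def propose_next_agent(map_features, placed_agents, rng):
    """Placeholder for the learned conditional distribution over the next agent's
    state (x, y, heading, length, width); returns None when the scene is complete."""
    if len(placed_agents) >= 12:                 # toy stopping rule
        return None
    return rng.standard_normal(5)

def generate_scene(map_features, rng):
    """Insert agents one at a time, each conditioned on the map and prior agents."""
    agents = []
    while True:
        agent = propose_next_agent(map_features, agents, rng)
        if agent is None:
            break
        agents.append(agent)
    return np.stack(agents)

scene = generate_scene(map_features=None, rng=np.random.default_rng(0))
```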