OASim: an Open and Adaptive Simulator based on Neural Rendering for
Autonomous Driving
- URL: http://arxiv.org/abs/2402.03830v1
- Date: Tue, 6 Feb 2024 09:19:44 GMT
- Title: OASim: an Open and Adaptive Simulator based on Neural Rendering for
Autonomous Driving
- Authors: Guohang Yan, Jiahao Pi, Jianfei Guo, Zhaotong Luo, Min Dou, Nianchen
Deng, Qiusheng Huang, Daocheng Fu, Licheng Wen, Pinlong Cai, Xing Gao, Xinyu
Cai, Bo Zhang, Xuemeng Yang, Yeqi Bai, Hongbin Zhou, Botian Shi
- Abstract summary: OASim is an open and adaptive simulator and autonomous driving data generator based on implicit neural rendering.
Data plays a core role in the algorithm closed-loop system, but collecting real-world data is expensive, time-consuming, and unsafe.
- Score: 11.682732129252118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of deep learning and computer vision technology,
autonomous driving offers new solutions for improving traffic safety and
efficiency. The importance of building high-quality datasets is self-evident,
especially with the rise of end-to-end autonomous driving algorithms in recent
years. Data plays a core role in the algorithm closed-loop system, yet
collecting real-world data is expensive, time-consuming, and unsafe. Building
on implicit rendering technology and research into producing data at scale with
generative models, we propose OASim, an open and adaptive simulator and
autonomous driving data generator based on implicit neural rendering. It has
the following characteristics: (1) high-quality scene reconstruction through
neural implicit surface reconstruction; (2) trajectory editing of the ego
vehicle and participating vehicles; (3) a rich vehicle model library from which
models can be freely selected and inserted into the scene; (4) a rich sensor
model library from which specific sensors can be selected for data generation;
(5) a highly customizable data generation system that produces data according
to user needs. We demonstrate the quality and fidelity of the generated data
through perception performance evaluation on data from the CARLA simulator and
from real-world acquisition. Code is available at
https://github.com/PJLab-ADG/OASim.
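To make the five characteristics above concrete, here is a hypothetical, self-contained sketch of an OASim-style data-generation pipeline. Every class and method below (Scene, Vehicle, Sensor, render_frame, generate) is an illustrative assumption, not the actual OASim API; see the linked repository for the real interface.

```python
# Hypothetical sketch of an OASim-style data-generation pipeline.
# All names are illustrative assumptions, not the real OASim API.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians


@dataclass
class Vehicle:
    model_name: str          # chosen from a vehicle model library (3)
    trajectory: List[Pose]


@dataclass
class Sensor:
    kind: str                # e.g. "camera" or "lidar" (4)
    rate_hz: float


@dataclass
class Scene:
    """Stand-in for a scene reconstructed with neural implicit surfaces (1)."""
    name: str
    ego: Optional[Vehicle] = None
    actors: List[Vehicle] = field(default_factory=list)
    sensors: List[Sensor] = field(default_factory=list)

    def render_frame(self, t: float) -> Dict[str, str]:
        # A real simulator would query the neural renderer here; this toy
        # version only returns a description of what would be rendered.
        return {s.kind: f"{self.name} at t={t:.1f}s via {s.kind}"
                for s in self.sensors}


def generate(scene: Scene, duration_s: float, step_s: float) -> List[Dict[str, str]]:
    """Customizable generation loop (5): duration, step size, actors and
    sensor set are all user-specified."""
    steps = int(duration_s / step_s)
    return [scene.render_frame(i * step_s) for i in range(steps)]


if __name__ == "__main__":
    scene = Scene(name="reconstructed_street")                          # (1)
    scene.ego = Vehicle("sedan", [Pose(0, 0, 0), Pose(10, 0.5, 0.02)])  # (2) edited ego trajectory
    scene.actors.append(Vehicle("truck", [Pose(5, 3, 3.14)]))           # (3) inserted actor
    scene.sensors = [Sensor("camera", 30.0), Sensor("lidar", 10.0)]     # (4) chosen sensors
    print(generate(scene, duration_s=1.0, step_s=0.5))                  # (5)
```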
Related papers
- Solving Motion Planning Tasks with a Scalable Generative Model [15.858076912795621]
We present an efficient solution based on generative models that learns the dynamics of driving scenes.
Our innovative design allows the model to operate in both full-autoregressive and partial-autoregressive modes.
We conclude that the proposed generative model may serve as a foundation for a variety of motion planning tasks.
arXiv Detail & Related papers (2024-07-03T03:57:05Z)
- SimGen: Simulator-conditioned Driving Scene Generation [50.03358485083602]
We introduce a simulator-conditioned scene generation framework called SimGen.
SimGen learns to generate diverse driving scenes by mixing data from the simulator and the real world.
It achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator.
arXiv Detail & Related papers (2024-06-13T17:58:32Z)
- SCaRL - A Synthetic Multi-Modal Dataset for Autonomous Driving [0.0]
We present a novel synthetically generated multi-modal dataset, SCaRL, to enable the training and validation of autonomous driving solutions.
SCaRL is a large dataset based on the CARLA Simulator, which provides data for diverse, dynamic scenarios and traffic conditions.
arXiv Detail & Related papers (2024-05-27T10:31:26Z)
- SubjectDrive: Scaling Generative Data in Autonomous Driving via Subject Control [59.20038082523832]
We present SubjectDrive, the first model proven to scale generative data production in a way that could continuously improve autonomous driving applications.
We develop a novel model equipped with a subject control mechanism, which allows the generative model to leverage diverse external data sources for producing varied and useful data.
arXiv Detail & Related papers (2024-03-28T14:07:13Z)
- Open-sourced Data Ecosystem in Autonomous Driving: the Present and Future [130.87142103774752]
This review systematically assesses over seventy open-source autonomous driving datasets.
It offers insights into various aspects, such as the principles underlying the creation of high-quality datasets.
It also delves into the scientific and technical challenges that warrant resolution.
arXiv Detail & Related papers (2023-12-06T10:46:53Z)
- Development of a Realistic Crowd Simulation Environment for Fine-grained Validation of People Tracking Methods [0.7223361655030193]
This work develops an extension of a crowd simulator (named CrowdSim2) and proves its usability for the evaluation of people-tracking algorithms.
The simulator is developed using the popular Unity 3D engine, with particular emphasis on environmental realism.
Three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-SORT, and Deep-TAMA; a minimal IoU-matching sketch follows this entry.
arXiv Detail & Related papers (2023-04-26T09:29:58Z)
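For context on the trackers named in the CrowdSim2 entry above, here is a minimal sketch of the greedy IoU-matching idea behind IOU-Tracker (Deep-SORT and Deep-TAMA add learned appearance features on top). The box format and threshold are illustrative assumptions, not the evaluated implementations.

```python
# Minimal IoU-based tracking sketch: greedily extend each track with the
# detection that overlaps it most, spawn new tracks for leftovers, and let
# unmatched tracks terminate. Illustrates the idea only, not the paper's code.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def step(tracks: Dict[int, Box], detections: List[Box],
         next_id: int, thresh: float = 0.3) -> Tuple[Dict[int, Box], int]:
    """Advance all tracks by one frame; returns (updated tracks, next_id)."""
    updated: Dict[int, Box] = {}
    remaining = list(detections)
    for tid, box in tracks.items():
        if not remaining:
            break
        best = max(remaining, key=lambda d: iou(box, d))
        if iou(box, best) >= thresh:      # matched: track continues
            updated[tid] = best
            remaining.remove(best)
    for det in remaining:                 # unmatched detections start new tracks
        updated[next_id] = det
        next_id += 1
    return updated, next_id


if __name__ == "__main__":
    tracks, next_id = {}, 0
    frames = [[(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]]
    for dets in frames:
        tracks, next_id = step(tracks, dets, next_id)
        print(tracks)
```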
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show that TrafficBots can simulate realistic multi-agent behaviors; a toy closed-loop rollout in this spirit follows the entry.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
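As a toy illustration of the TrafficBots entry above, where traffic simulation is formulated as a world model driven by a shared multi-agent policy, the sketch below closes the loop by feeding each step's policy outputs back in as the next world state. The hand-written goal-seeking policy is a stand-in assumption, not the learned TrafficBots model.

```python
# Toy world-model rollout: one shared policy drives every agent, and its
# predictions become the next simulator state (closed loop). The steering
# rule is an assumed placeholder for a learned motion-prediction policy.
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AgentState:
    x: float
    y: float
    goal_x: float  # TrafficBots conditions agents on destinations;
    goal_y: float  # here the goal is simply a point to steer toward.


def policy(me: AgentState, others: List[AgentState],
           speed: float = 1.0) -> Tuple[float, float]:
    """Shared per-agent policy: step toward the goal, slow down when
    another agent is close (a crude stand-in for learned interaction)."""
    dx, dy = me.goal_x - me.x, me.goal_y - me.y
    dist = math.hypot(dx, dy) or 1.0
    if any(math.hypot(o.x - me.x, o.y - me.y) < 2.0 for o in others):
        speed *= 0.3  # yield to nearby traffic
    return me.x + speed * dx / dist, me.y + speed * dy / dist


def rollout(agents: List[AgentState], steps: int) -> None:
    for t in range(steps):
        # All agents act on the same observed state, then the world updates:
        # this synchronous closed loop is what makes it a simulator.
        moves = [policy(a, agents[:i] + agents[i + 1:])
                 for i, a in enumerate(agents)]
        for a, (nx, ny) in zip(agents, moves):
            a.x, a.y = nx, ny
        print(t, [(round(a.x, 1), round(a.y, 1)) for a in agents])


if __name__ == "__main__":
    rollout([AgentState(0, 0, 10, 0), AgentState(10, 0, 0, 0)], steps=5)
```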
- CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous Driving Tasks [11.489187712465325]
An autonomous driving system should effectively use the information collected from its various sensors to form an abstract description of the world.
Deep learning models, such as autoencoders, can be used for that purpose, as they learn compact latent representations from a stream of incoming data.
This work proposes CARNet, a Combined dynAmic autoencodeR NETwork that couples an autoencoder with a recurrent neural network to learn the current latent representation; a generic sketch of this pattern follows the entry.
arXiv Detail & Related papers (2022-05-18T04:15:42Z)
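The autoencoder-plus-recurrent-network pattern described in the CARNet entry above can be sketched generically, as below. The layer sizes and the GRU choice are arbitrary assumptions; this is not the published CARNet architecture.

```python
# Generic sketch of an autoencoder combined with a recurrent network for
# latent dynamics, the pattern CARNet builds on. Layer sizes are arbitrary
# assumptions and do not reproduce the published architecture.
import torch
import torch.nn as nn


class LatentDynamicsAE(nn.Module):
    def __init__(self, obs_dim: int = 64, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, obs_dim))
        # The GRU carries the latent state forward in time.
        self.rnn = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, obs_seq: torch.Tensor):
        # obs_seq: (batch, time, obs_dim), e.g. flattened sensor features.
        z = self.encoder(obs_seq)   # per-frame latents
        z_dyn, _ = self.rnn(z)      # temporally fused latents
        recon = self.decoder(z_dyn)  # reconstruct observations
        return recon, z_dyn


if __name__ == "__main__":
    model = LatentDynamicsAE()
    x = torch.randn(8, 10, 64)                # batch of 10-frame sequences
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
    print(recon.shape, z.shape, float(loss))
```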
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)