READ: Large-Scale Neural Scene Rendering for Autonomous Driving
- URL: http://arxiv.org/abs/2205.05509v1
- Date: Wed, 11 May 2022 14:02:14 GMT
- Title: READ: Large-Scale Neural Scene Rendering for Autonomous Driving
- Authors: Zhuopeng Li, Lu Li, Zeyu Ma, Ping Zhang, Junbo Chen, Jianke Zhu
- Abstract summary: A large-scale neural rendering method is proposed to synthesize the autonomous driving scene.
Our model can not only synthesize realistic driving scenes but also stitch and edit driving scenes.
- Score: 21.144110676687667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing free-view photo-realistic images is an important task in
multimedia. With the development of advanced driver assistance systems (ADAS)
and their applications in autonomous vehicles, experimenting with different
scenarios becomes a challenge. Although photo-realistic street scenes can
be synthesized by image-to-image translation methods, these methods cannot
produce coherent scenes due to the lack of 3D information. In this paper, a large-scale
neural rendering method is proposed to synthesize the autonomous driving
scene (READ), which makes it possible to synthesize large-scale driving
scenarios on a PC through a variety of sampling schemes. In order to represent
driving scenarios, we propose an ω rendering network to learn neural
descriptors from sparse point clouds. Our model can not only synthesize
realistic driving scenes but also stitch and edit driving scenes. Experiments
show that our model performs well in large-scale driving scenarios.
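The abstract summarizes the approach without code, but its core building block, learnable neural descriptors attached to a sparse point cloud and decoded by a rendering network, can be illustrated with a short sketch. The PyTorch module below is a hedged stand-in, not the authors' ω network or their sampling schemes: the class name, the naive z-buffer splat, and all layer sizes are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of point-based neural rendering:
# learnable per-point descriptors are splatted into the image plane and decoded
# by a small convolutional rendering network. Names and sizes are illustrative.
import torch
import torch.nn as nn

class PointDescriptorRenderer(nn.Module):
    def __init__(self, num_points, descriptor_dim=8, out_channels=3):
        super().__init__()
        # One learnable neural descriptor per point in the sparse point cloud.
        self.descriptors = nn.Parameter(0.01 * torch.randn(num_points, descriptor_dim))
        # Stand-in for a rendering network that maps descriptor images to RGB.
        self.render_net = nn.Sequential(
            nn.Conv2d(descriptor_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, padding=1), nn.Sigmoid(),
        )

    def rasterize(self, points_cam, H, W, focal):
        """Naive z-buffer splat of point descriptors into an H x W feature image."""
        feat = torch.zeros(self.descriptors.shape[1], H, W)
        depth = torch.full((H, W), float("inf"))
        x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        u = (focal * x / z + W / 2).long().clamp(0, W - 1)
        v = (focal * y / z + H / 2).long().clamp(0, H - 1)
        for i in range(points_cam.shape[0]):  # plain loop for clarity, not speed
            if z[i] > 0 and z[i] < depth[v[i], u[i]]:
                depth[v[i], u[i]] = z[i]
                feat[:, v[i], u[i]] = self.descriptors[i]
        return feat.unsqueeze(0)  # (1, C, H, W)

    def forward(self, points_cam, H, W, focal):
        # points_cam: (N, 3) points already expressed in the target camera frame.
        return self.render_net(self.rasterize(points_cam, H, W, focal))

# Usage: render a 120x160 view from 10k points; in practice the descriptors and the
# decoder would be trained jointly against posed ground-truth photographs.
points = torch.randn(10_000, 3) + torch.tensor([0.0, 0.0, 5.0])
model = PointDescriptorRenderer(num_points=10_000)
image = model(points, H=120, W=160, focal=100.0)  # (1, 3, 120, 160)
```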
Related papers
- DreamDrive: Generative 4D Scene Modeling from Street View Images [55.45852373799639]
We present DreamDrive, a 4D spatial-temporal scene generation approach that combines the merits of generation and reconstruction.
Specifically, we leverage the generative power of video diffusion models to synthesize a sequence of visual references.
We then render 3D-consistent driving videos via Gaussian splatting.
arXiv Detail & Related papers (2024-12-31T18:59:57Z)
- Stag-1: Towards Realistic 4D Driving Simulation with Video Generation Model [83.31688383891871]
We propose a Spatial-Temporal simulAtion for drivinG (Stag-1) model to reconstruct real-world scenes.
Stag-1 constructs continuous 4D point cloud scenes using surround-view data from autonomous vehicles.
It decouples spatial-temporal relationships and produces coherent driving videos.
arXiv Detail & Related papers (2024-12-06T18:59:56Z)
- FreeVS: Generative View Synthesis on Free Driving Trajectory [55.49370963413221]
FreeVS is a novel fully generative approach that can synthesize camera views on free new trajectories in real driving scenes.
FreeVS can be applied to any validation sequence without a reconstruction process and can synthesize views on novel trajectories.
arXiv Detail & Related papers (2024-10-23T17:59:11Z)
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- AutoSplat: Constrained Gaussian Splatting for Autonomous Driving Scene Reconstruction [17.600027937450342]
AutoSplat is a framework employing Gaussian splatting to achieve highly realistic reconstructions of autonomous driving scenes.
Our method enables multi-view consistent simulation of challenging scenarios including lane changes.
arXiv Detail & Related papers (2024-07-02T18:36:50Z)
- Urban Scene Diffusion through Semantic Occupancy Map [49.20779809250597]
UrbanDiffusion is a 3D diffusion model conditioned on a Bird's-Eye View (BEV) map.
Our model learns the data distribution of scene-level structures within a latent space.
After training on real-world driving datasets, our model can generate a wide range of diverse urban scenes.
arXiv Detail & Related papers (2024-03-18T11:54:35Z)
- SceneGen: Learning to Generate Realistic Traffic Scenes [92.98412203941912]
We present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and distributions.
We demonstrate SceneGen's ability to faithfully model distributions of real traffic scenes.
arXiv Detail & Related papers (2021-01-16T22:51:43Z)
- Photorealism in Driving Simulations: Blending Generative Adversarial Image Synthesis with Rendering [0.0]
We introduce a hybrid generative neural graphics pipeline for improving the visual fidelity of driving simulations.
We form 2D semantic images from 3D scenery consisting of simple object models without textures.
These semantic images are then converted into photorealistic RGB images with a state-of-the-art Generative Adversarial Network (GAN) trained on real-world driving scenes (a minimal sketch of this step appears after this list).
arXiv Detail & Related papers (2020-07-31T03:25:17Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)
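Several entries above, notably the hybrid GAN pipeline and SurfelGAN, share a common final step: translating an intermediate semantic or reconstructed image into a photorealistic RGB frame with an image-to-image generator. The sketch below illustrates only that step under stated assumptions; the tiny encoder-decoder generator, the class count, and the image size are invented for illustration and do not correspond to any of the papers' actual architectures.

```python
# Minimal sketch (assumed) of the semantic-to-photorealistic translation step used by
# GAN-based driving simulation pipelines: a one-hot semantic label map is mapped to an
# RGB image by a conditional generator. The generator here is deliberately tiny.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of semantic classes (road, car, vegetation, ...)

class SemanticToRGBGenerator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.ReLU(),   # downsample
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),    # RGB in [-1, 1]
        )

    def forward(self, onehot_semantics):
        # onehot_semantics: (B, num_classes, H, W) one-hot label maps.
        return self.net(onehot_semantics)

# Usage: translate a 256x512 semantic map of class indices into an RGB image.
labels = torch.randint(0, NUM_CLASSES, (1, 256, 512))
onehot = nn.functional.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
generator = SemanticToRGBGenerator()  # in practice trained adversarially on real driving scenes
rgb = generator(onehot)               # (1, 3, 256, 512)
```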
This list is automatically generated from the titles and abstracts of the papers on this site.