Future Urban Scenes Generation Through Vehicles Synthesis
- URL: http://arxiv.org/abs/2007.00323v3
- Date: Fri, 22 Oct 2021 07:54:00 GMT
- Title: Future Urban Scenes Generation Through Vehicles Synthesis
- Authors: Alessandro Simoni and Luca Bergamini and Andrea Palazzi and Simone
Calderara and Rita Cucchiara
- Abstract summary: We propose a deep learning pipeline to predict the visual future appearance of an urban scene.
We follow a two-stage approach, where interpretable information is included in the loop and each actor is modelled independently.
We show the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow.
- Score: 90.1731992199415
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work we propose a deep learning pipeline to predict the visual future
appearance of an urban scene. Despite recent advances, generating the entire
scene in an end-to-end fashion is still far from being achieved. Instead, here
we follow a two-stage approach, where interpretable information is included in
the loop and each actor is modelled independently. We leverage a per-object
novel view synthesis paradigm, i.e., generating a synthetic representation of an
object undergoing a geometric roto-translation in 3D space. Our model can
be easily conditioned with constraints (e.g., input trajectories) provided by
state-of-the-art tracking methods or by the user directly. This allows us to
generate a set of diverse realistic futures starting from the same input in a
multi-modal fashion. We visually and quantitatively show the superiority of
this approach over traditional end-to-end scene-generation methods on CityFlow,
a challenging real-world dataset.
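
The abstract describes the pipeline only at a high level. As a purely illustrative aid, the sketch below shows one way the two-stage idea (per-object novel view synthesis under a given roto-translation, followed by compositing onto the background) could be wired up; every module, parameter name, and tensor convention here is a hypothetical assumption and not the authors' released code.

# Hedged sketch of the two-stage idea: stage 1 supplies a future roto-translation
# per vehicle (from a tracker or the user), stage 2 renders each vehicle under
# that transform and composites it back onto the inpainted background.
# All names, shapes, and architectures below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NovelViewGenerator(nn.Module):
    """Placeholder per-object generator: image crop + pose change -> new crop + alpha."""
    def __init__(self, pose_dim: int = 6, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
        self.pose_proj = nn.Linear(pose_dim, feat_dim)
        self.decoder = nn.Conv2d(feat_dim, 4, 3, padding=1)  # RGB + alpha matte

    def forward(self, crop, delta_pose):
        feat = self.encoder(crop)                                  # (B, feat_dim, h, w)
        feat = feat + self.pose_proj(delta_pose)[..., None, None]  # inject the roto-translation
        out = self.decoder(feat)
        return out[:, :3].sigmoid(), out[:, 3:].sigmoid()          # appearance, alpha

def predict_future_frame(background, crops, boxes, delta_poses, generator):
    """Composite independently synthesized vehicles onto a static background.

    background:  (3, H, W) inpainted scene without vehicles
    crops:       list of (3, h, w) vehicle crops at time t
    boxes:       list of (y, x, h, w) future placements, assumed inside the frame
    delta_poses: list of (6,) roto-translations between t and t+1
    """
    frame = background.clone()
    for crop, (y, x, h, w), dp in zip(crops, boxes, delta_poses):
        rgb, alpha = generator(crop.unsqueeze(0), dp.unsqueeze(0))
        rgb = F.interpolate(rgb, size=(h, w), mode="bilinear", align_corners=False)[0]
        alpha = F.interpolate(alpha, size=(h, w), mode="bilinear", align_corners=False)[0]
        frame[:, y:y + h, x:x + w] = alpha * rgb + (1 - alpha) * frame[:, y:y + h, x:x + w]
    return frame

Sampling several candidate trajectory sets and calling predict_future_frame once per set would be one straightforward way to obtain the multi-modal futures mentioned in the abstract.
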
Related papers
- Urban Scene Diffusion through Semantic Occupancy Map [49.20779809250597]
UrbanDiffusion is a 3D diffusion model conditioned on a Bird's-Eye View (BEV) map.
Our model learns the data distribution of scene-level structures within a latent space.
After training on real-world driving datasets, our model can generate a wide range of diverse urban scenes.
arXiv Detail & Related papers (2024-03-18T11:54:35Z)
- Generative Novel View Synthesis with 3D-Aware Diffusion Models [96.78397108732233]
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
Our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.
In addition to generating novel views, our method has the ability to autoregressively synthesize 3D-consistent sequences.
arXiv Detail & Related papers (2023-04-05T17:15:47Z)
- Towards 3D Scene Understanding by Referring Synthetic Models [65.74211112607315]
Existing methods typically rely on labour-extensive annotations of real scene scans.
We explore how labelled synthetic models can substitute for real scene annotations by projecting the features of synthetic models and real scenes into a unified feature space.
Experiments show that our method achieves an average mAP of 46.08% on ScanNet and 55.49% on S3DIS by learning from synthetic models.
arXiv Detail & Related papers (2022-03-20T13:06:15Z)
- Long-term Human Motion Prediction with Scene Context [60.096118270451974]
We propose a novel three-stage framework for predicting human motion.
Our method first samples multiple human motion goals, then plans 3D human paths towards each goal, and finally predicts 3D human pose sequences following each path.
arXiv Detail & Related papers (2020-07-07T17:59:53Z)
- Future Video Synthesis with Object Motion Prediction [54.31508711871764]
Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics.
The appearance of the scene components in the future is predicted by non-rigid deformation of the background and affine transformation of moving objects.
Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state-of-the-art in terms of visual quality and accuracy.
arXiv Detail & Related papers (2020-04-01T16:09:54Z)
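
As an aside on the last entry above, the decomposition it describes (a non-rigidly warped background plus affine-transformed moving objects) can be illustrated with a short hedged sketch; the function names, tensor layouts, and RGBA convention below are assumptions made for illustration, not the paper's implementation.

# Hedged illustration of composing a future frame from a warped background and
# affine-transformed object layers; all conventions here are assumptions.
import torch
import torch.nn.functional as F

def warp_background(background, flow):
    """Warp a (1, 3, H, W) background with a predicted (1, 2, H, W) backward flow in pixels."""
    _, _, h, w = background.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow[0].permute(1, 2, 0)  # (H, W, 2)
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1   # normalize x to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1   # normalize y to [-1, 1]
    return F.grid_sample(background, grid.unsqueeze(0), align_corners=True)

def place_object(frame, obj_rgba, theta):
    """Alpha-composite one (1, 4, H, W) RGBA object layer moved by a (2, 3) affine theta."""
    grid = F.affine_grid(theta.unsqueeze(0), list(obj_rgba.shape), align_corners=False)
    moved = F.grid_sample(obj_rgba, grid, align_corners=False)
    rgb, alpha = moved[:, :3], moved[:, 3:]
    return alpha * rgb + (1 - alpha) * frame

Under these assumptions, a predicted frame would be warp_background(...) followed by one place_object(...) call per moving object, composited back to front.
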