Animating Street View
- URL: http://arxiv.org/abs/2310.08534v1
- Date: Thu, 12 Oct 2023 17:24:05 GMT
- Title: Animating Street View
- Authors: Mengyi Shan, Brian Curless, Ira Kemelmacher-Shlizerman and Steve Seitz
- Abstract summary: We present a system that automatically brings street view imagery to life by populating it with naturally behaving, animated pedestrians and vehicles.
Our approach removes existing people and vehicles from the input image, inserts moving objects with proper scale, angle, motion, and appearance, and plans paths and traffic behavior.
We demonstrate results on a diverse range of street scenes including regular still images and panoramas.
- Score: 14.203239158327
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a system that automatically brings street view imagery to life by
populating it with naturally behaving, animated pedestrians and vehicles. Our
approach is to remove existing people and vehicles from the input image, insert
moving objects with proper scale, angle, motion, and appearance, plan paths and
traffic behavior, as well as render the scene with plausible occlusion and
shadowing effects. The system achieves these by reconstructing the still image
street scene, simulating crowd behavior, and rendering with consistent
lighting, visibility, occlusions, and shadows. We demonstrate results on a
diverse range of street scenes including regular still images and panoramas.
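The abstract describes a pipeline of still-scene reconstruction, crowd/traffic simulation, and rendering with consistent scale and shadows. As a rough illustration of two of those ingredients, scaling an inserted object by its distance to the camera and advancing it along a planned path, here is a minimal sketch; the class, function names, and numbers are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: not the authors' code. Assumes a pinhole camera
# and a flat ground plane; all names and constants are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class MovingObject:
    position: np.ndarray   # (x, z) ground-plane position in metres; z = distance from camera
    velocity: np.ndarray   # (vx, vz) in metres per second
    height_m: float        # real-world height, e.g. ~1.7 m for a pedestrian


def pixel_height(obj: MovingObject, focal_px: float) -> float:
    """Pinhole-camera scaling: on-screen height = focal length * real height / depth."""
    depth = max(float(obj.position[1]), 1e-3)
    return focal_px * obj.height_m / depth


def simulate(objects: List[MovingObject], steps: int,
             dt: float = 1.0 / 30.0) -> List[List[Tuple[float, float]]]:
    """Advance each object along a constant-velocity path and record per-frame positions."""
    trajectory: List[List[Tuple[float, float]]] = []
    for _ in range(steps):
        frame = []
        for obj in objects:
            obj.position = obj.position + obj.velocity * dt
            frame.append((float(obj.position[0]), float(obj.position[1])))
        trajectory.append(frame)
    return trajectory


if __name__ == "__main__":
    # A pedestrian 8 m from the camera, walking across the street at ~1.2 m/s.
    ped = MovingObject(position=np.array([-2.0, 8.0]),
                       velocity=np.array([1.2, 0.0]),
                       height_m=1.7)
    frames = simulate([ped], steps=60)  # two seconds at 30 fps
    print("final ground-plane position:", frames[-1][0])
    print("on-screen height (px):", round(pixel_height(ped, focal_px=1000.0), 1))
```

In the full system these per-frame positions and on-screen scales would drive the compositing step, together with occlusion and shadow rendering, which this sketch does not attempt to model.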
Related papers
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
The correct insertion of virtual objects into images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion [61.929653153389964]
We present a method for generating Streetscapes, that is, long sequences of views through an on-the-fly synthesized city-scale scene.
Our method can scale to much longer-range camera trajectories, spanning several city blocks, while maintaining visual quality and consistency.
arXiv Detail & Related papers (2024-07-18T17:56:30Z)
- 3D StreetUnveiler with Semantic-Aware 2DGS [66.90611944550392]
StreetUnveiler learns a 3D representation of an empty street from crowded observations.
We divide the empty street scene into observed, partial-observed, and unobserved regions.
Experiments conducted on a street scene dataset successfully reconstruct a 3D representation of the empty street.
arXiv Detail & Related papers (2024-05-28T17:57:12Z)
- Erasing the Ephemeral: Joint Camera Refinement and Transient Object Removal for Street View Synthesis [44.90761677737313]
We introduce a method that tackles challenges on view synthesis for outdoor scenarios.
We employ a neural point light field scene representation and strategically detect and mask out dynamic objects to reconstruct novel scenes without artifacts.
We demonstrate state-of-the-art results in synthesizing novel views of urban scenes.
arXiv Detail & Related papers (2023-11-29T13:51:12Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- 3D Moments from Near-Duplicate Photos [67.15199743223332]
3D Moments is a new computational photography effect.
We produce a video that smoothly interpolates the scene motion from the first photo to the second.
Our system produces photorealistic space-time videos with motion parallax and scene dynamics.
arXiv Detail & Related papers (2022-05-12T17:56:18Z)
- Repopulating Street Scenes [59.2621467759251]
We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.
Applications of this method include anonymizing images to enhance privacy and generating data augmentations for perception tasks such as autonomous driving.
arXiv Detail & Related papers (2021-03-30T09:04:46Z)
- People as Scene Probes [9.393640749709999]
We show how to composite new objects into the same scene with a high degree of automation and realism.
In particular, when a user places a new object (2D cut-out) in the image, it is automatically rescaled, relit, occluded properly, and casts realistic shadows in the correct direction relative to the sun.
arXiv Detail & Related papers (2020-07-17T19:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.