Repopulating Street Scenes
- URL: http://arxiv.org/abs/2103.16183v1
- Date: Tue, 30 Mar 2021 09:04:46 GMT
- Title: Repopulating Street Scenes
- Authors: Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless,
Steven M. Seitz, Noah Snavely
- Abstract summary: We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.
Applications of this method include anonymizing images to enhance privacy and generating data augmentations for perception tasks such as autonomous driving.
- Score: 59.2621467759251
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a framework for automatically reconfiguring images of street
scenes by populating, depopulating, or repopulating them with objects such as
pedestrians or vehicles. Applications of this method include anonymizing images
to enhance privacy, generating data augmentations for perception tasks like
autonomous driving, and composing scenes to achieve a certain ambiance, such as
empty streets in the early morning. At a technical level, our work has three
primary contributions: (1) a method for clearing images of objects, (2) a
method for estimating sun direction from a single image, and (3) a way to
compose objects in scenes that respects scene geometry and illumination. Each
component is learned from data with minimal ground truth annotations, by making
creative use of large numbers of short image bursts of street scenes. We
demonstrate convincing results on a range of street scenes and illustrate
potential applications.
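As a rough illustration of contributions (2) and (3), and not taken from the paper itself, the sketch below shows how scene geometry and an estimated sun direction constrain an inserted object: a pinhole camera model fixes the object's on-screen scale, and the sun's elevation and azimuth fix the length and direction of its cast shadow. The focal length, depth, and sun angles are hypothetical values.

```python
import math

def pedestrian_pixel_height(focal_px: float, height_m: float, depth_m: float) -> float:
    """Pinhole projection: on-screen height (pixels) of an object of
    real-world height `height_m` standing at camera depth `depth_m`."""
    return focal_px * height_m / depth_m

def ground_shadow(height_m: float, sun_elevation_deg: float, sun_azimuth_deg: float):
    """Length (meters) and direction of the shadow a vertical object casts
    on a flat ground plane. The shadow points away from the sun, so its
    azimuth is the sun azimuth plus 180 degrees."""
    length = height_m / math.tan(math.radians(sun_elevation_deg))
    direction_deg = (sun_azimuth_deg + 180.0) % 360.0
    return length, direction_deg

# Hypothetical numbers: a 1.7 m pedestrian 12 m from a camera with a
# 1200 px focal length, under a sun 35 degrees above the horizon.
print(pedestrian_pixel_height(1200, 1.7, 12.0))  # ~170 px tall
print(ground_shadow(1.7, 35.0, 220.0))           # ~2.43 m shadow, azimuth 40
```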
Related papers
- Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion [61.929653153389964]
We present a method for generating Streetscapes: long sequences of views through an on-the-fly synthesized city-scale scene.
Our method can scale to much longer-range camera trajectories, spanning several city blocks, while maintaining visual quality and consistency.
arXiv Detail & Related papers (2024-07-18T17:56:30Z)
- 3D StreetUnveiler with Semantic-Aware 2DGS [66.90611944550392]
StreetUnveiler learns a 3D representation of an empty street from crowded observations.
We divide the empty street scene into observed, partial-observed, and unobserved regions.
Experiments on a street scene dataset show that the method successfully reconstructs a 3D representation of the empty street.
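One way to picture the observed / partially-observed / unobserved split is by counting how many input views actually see each scene point. The sketch below is a loose interpretation of that idea, not StreetUnveiler's implementation; the visibility matrix and the view-count threshold are assumptions.

```python
import numpy as np

def classify_regions(visibility: np.ndarray, full_view_count: int) -> np.ndarray:
    """visibility: (num_points, num_views) boolean matrix, True where a
    scene point is unoccluded in a view. Returns a per-point label:
    2 = observed, 1 = partially observed, 0 = unobserved."""
    views_seen = visibility.sum(axis=1)
    labels = np.zeros(len(views_seen), dtype=np.int8)
    labels[views_seen >= full_view_count] = 2  # seen often enough to trust
    labels[(views_seen > 0) & (views_seen < full_view_count)] = 1
    return labels

# Toy example: 4 scene points observed by 5 views.
vis = np.array([[1, 1, 1, 1, 1],
                [1, 1, 0, 0, 0],
                [0, 0, 0, 0, 0],
                [1, 1, 1, 0, 0]], dtype=bool)
print(classify_regions(vis, full_view_count=3))  # [2 1 0 2]
```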
arXiv Detail & Related papers (2024-05-28T17:57:12Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first uses a 3D diffusion model to generate texture colors at the point level for a given geometry, then transforms them into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Reconstructing Continuous Light Field From Single Coded Image [7.937367109582907]
We propose a method for reconstructing a continuous light field of a target scene from a single observed image.
Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image.
NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints.
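Conceptually, joint aperture-exposure coding multiplexes many angular and temporal samples of the light field into a single sensor image as a code-weighted sum. The following toy simulation of that forward model uses random binary codes and assumed notation; it is not the paper's camera design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field: 9 sub-aperture views of a 64x64 scene, over 4 exposure slices.
views, slices, H, W = 9, 4, 64, 64
light_field = rng.random((slices, views, H, W))

# Binary codes: which sub-aperture is open during which exposure slice.
code = rng.integers(0, 2, size=(slices, views)).astype(float)

# Forward model: the sensor integrates code-weighted views over the exposure.
observed = np.einsum('tv,tvhw->hw', code, light_field)
print(observed.shape)  # (64, 64): one coded image embedding 3-D scene information
```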
arXiv Detail & Related papers (2023-11-16T07:59:01Z)
- Animating Street View [14.203239158327]
We present a system that automatically brings street view imagery to life by populating it with naturally behaving, animated pedestrians and vehicles.
Our approach removes existing people and vehicles from the input image; inserts moving objects with proper scale, angle, motion, and appearance; and plans their paths and traffic behavior.
We demonstrate results on a diverse range of street scenes including regular still images and panoramas.
arXiv Detail & Related papers (2023-10-12T17:24:05Z)
- PSDR-Room: Single Photo to Scene using Differentiable Rendering [18.23851486874071]
A 3D digital scene contains many components: lights, materials, and geometries that interact to produce the desired appearance.
We propose PSDR-Room, a system that optimizes the lighting as well as the pose and materials of individual objects to match a target image of a room scene, with minimal user input.
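The usual recipe behind such systems is gradient descent on an image loss through a differentiable renderer. The PyTorch-style sketch below shows that loop in the abstract; `render` is a hypothetical stand-in for a differentiable renderer and is not PSDR-Room's API.

```python
import torch

def render(lighting, pose, materials):
    # Placeholder for a differentiable renderer; assumed to return
    # an H x W x 3 image differentiable w.r.t. its inputs.
    raise NotImplementedError

def fit_scene(target, lighting, pose, materials, steps=500, lr=1e-2):
    """Optimize scene parameters so the rendering matches a target photo."""
    params = [lighting, pose, materials]
    for p in params:
        p.requires_grad_(True)
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = render(lighting, pose, materials)
        loss = torch.nn.functional.l1_loss(image, target)
        loss.backward()  # gradients flow back through the renderer
        opt.step()
    return lighting, pose, materials
```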
arXiv Detail & Related papers (2023-07-06T18:17:59Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be supervised solely on a set of real images of the scene under a single, unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
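Classical precomputed radiance transfer, which this work revisits with a learned transfer function, expresses relit radiance as a dot product between a per-point transfer vector and the lighting's spherical-harmonic coefficients. A minimal numpy sketch of that linear relationship, with illustrative shapes and random data:

```python
import numpy as np

# Per-pixel transfer vectors T (precomputed; encode visibility and BRDF)
# and lighting coefficients l in a spherical-harmonic basis.
num_pixels, num_sh = 1024, 9          # 9 coefficients = 3rd-order SH
T = np.random.rand(num_pixels, num_sh)
l_new = np.random.rand(num_sh)        # coefficients of a novel light

# Relighting is a single linear map: L_out = T @ l.
relit = T @ l_new
print(relit.shape)  # (1024,) outgoing radiance per pixel under the new light
```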
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Sampling Based Scene-Space Video Processing [89.49726406622842]
We present a novel, sampling-based framework for processing video.
It enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation.
We present results for various casually captured, hand-held, moving, compressed, monocular videos.
arXiv Detail & Related papers (2021-02-05T05:55:04Z)
- Predicting Semantic Map Representations from Images using Pyramid Occupancy Networks [27.86228863466213]
We present a simple, unified approach for estimating maps directly from monocular images using a single end-to-end deep learning architecture.
We demonstrate the effectiveness of our approach by evaluating against several challenging baselines on the NuScenes and Argoverse datasets.
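Bird's-eye-view map estimation of this kind is commonly scored with per-class intersection-over-union between predicted and ground-truth occupancy grids. A minimal sketch of that core metric follows; the exact NuScenes and Argoverse evaluation protocols add visibility masking and class details omitted here.

```python
import numpy as np

def bev_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean bird's-eye-view
    occupancy grids for a single semantic class."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy 200x200 grids with partially overlapping occupied blocks.
pred = np.zeros((200, 200), dtype=bool); pred[50:100, 50:100] = True
gt   = np.zeros((200, 200), dtype=bool); gt[60:110, 50:100] = True
print(bev_iou(pred, gt))  # 0.667: intersection 2000 cells, union 3000
```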
arXiv Detail & Related papers (2020-03-30T12:39:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.