People as Scene Probes
- URL: http://arxiv.org/abs/2007.09209v1
- Date: Fri, 17 Jul 2020 19:50:42 GMT
- Title: People as Scene Probes
- Authors: Yifan Wang, Brian Curless, Steve Seitz
- Abstract summary: We show how to composite new objects into the same scene with a high degree of automation and realism.
In particular, when a user places a new object (2D cut-out) in the image, it is automatically rescaled, relit, and occluded properly, and it casts realistic shadows in the correct direction relative to the sun.
- Score: 9.393640749709999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By analyzing the motion of people and other objects in a scene, we
demonstrate how to infer depth, occlusion, lighting, and shadow information
from video taken from a single camera viewpoint. This information is then used
to composite new objects into the same scene with a high degree of automation
and realism. In particular, when a user places a new object (2D cut-out) in the
image, it is automatically rescaled, relit, and occluded properly, and it casts
realistic shadows that fall in the correct direction relative to the sun and
conform properly to scene geometry. We demonstrate results (best viewed in
supplementary video) on a range of scenes and compare to alternative methods
for depth estimation and shadow compositing.
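
To make the compositing steps described in the abstract concrete, here is a minimal sketch (not the authors' code) of how an inserted 2D cut-out might be rescaled and occluded using quantities the paper estimates from people moving through the scene. The scale map, occlusion depth buffer, and all function and variable names below are illustrative assumptions; shadow synthesis and relighting are omitted for brevity.

```python
# Minimal sketch of depth-aware cut-out compositing, assuming a per-pixel
# scale map and an occlusion depth buffer have already been estimated by
# observing people walking through the scene (hypothetical inputs).
import numpy as np

def composite_cutout(frame, cutout_rgba, foot_xy, scale_map, occlusion_depth,
                     cutout_depth):
    """Paste a 2D cut-out into `frame`, anchored at its bottom-centre.

    frame           : HxWx3 float image of the background scene
    cutout_rgba     : hxwx4 float image of the object with an alpha channel
    foot_xy         : (x, y) pixel where the object touches the ground
    scale_map       : HxW map of apparent size (e.g. pixels per metre) at each
                      ground position, derived from observed people's heights
    occlusion_depth : HxW per-pixel scene depth used for occlusion tests
    cutout_depth    : scalar depth assigned to the inserted object
    """
    x, y = foot_xy
    # 1) Rescale: resize the cut-out in proportion to the local scale.
    #    Using the map maximum as the reference scale is a simplification.
    s = scale_map[y, x] / scale_map.max()
    h = max(1, int(cutout_rgba.shape[0] * s))
    w = max(1, int(cutout_rgba.shape[1] * s))
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    ri = (np.arange(h) * cutout_rgba.shape[0] / h).astype(int)
    ci = (np.arange(w) * cutout_rgba.shape[1] / w).astype(int)
    cut = cutout_rgba[ri][:, ci]

    out = frame.copy()
    top, left = y - h, x - w // 2
    for dy in range(h):
        for dx in range(w):
            yy, xx = top + dy, left + dx
            if not (0 <= yy < frame.shape[0] and 0 <= xx < frame.shape[1]):
                continue
            # 2) Occlusion: skip pixels where scene geometry is closer.
            if occlusion_depth[yy, xx] < cutout_depth:
                continue
            # 3) Alpha-blend the (rescaled) cut-out over the background.
            a = cut[dy, dx, 3]
            out[yy, xx] = a * cut[dy, dx, :3] + (1 - a) * out[yy, xx]
    return out
```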
Related papers
- Generative Omnimatte: Learning to Decompose Video into Layers [29.098471541412113]
We present a novel generative layered video decomposition framework to address the omnimatte problem.
Our core idea is to train a video diffusion model to identify and remove scene effects caused by a specific object.
We show that this model can be finetuned from an existing video inpainting model with a small, carefully curated dataset.
arXiv Detail & Related papers (2024-11-25T18:59:57Z)
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
The correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes [84.66946637534089]
PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
arXiv Detail & Related papers (2022-07-02T06:52:44Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Moving SLAM: Fully Unsupervised Deep Learning in Non-Rigid Scenes [85.56602190773684]
We build on the idea of view synthesis, which uses classical camera geometry to re-render a source image from a different point-of-view.
By minimizing the error between the synthetic image and the corresponding real image in a video, the deep network that predicts pose and depth can be trained completely unsupervised (a sketch of this photometric loss appears after this list).
arXiv Detail & Related papers (2021-05-05T17:08:10Z)
- A New Dimension in Testimony: Relighting Video with Reflectance Field Exemplars [1.069384486725302]
We present a learning-based method for estimating the 4D reflectance field of a person from video footage of that subject captured under flat lighting.
We estimate the lighting environment of the input footage and use the subject's reflectance field to synthesize images of the subject illuminated by that environment.
We evaluate our method on video footage of real Holocaust survivors and show that it outperforms state-of-the-art methods in both realism and speed.
arXiv Detail & Related papers (2021-04-06T20:29:06Z)
- Sampling Based Scene-Space Video Processing [89.49726406622842]
We present a novel, sampling-based framework for processing video.
It enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation.
We present results for various casually captured, hand-held, moving, compressed, monocular videos.
arXiv Detail & Related papers (2021-02-05T05:55:04Z)
- RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces [77.07767833443256]
We present RELATE, a model that learns to generate physically plausible scenes and videos of multiple interacting objects.
In contrast to state-of-the-art methods in object-centric generative modeling, RELATE also extends naturally to dynamic scenes and generates videos of high visual fidelity.
arXiv Detail & Related papers (2020-07-02T17:27:27Z)
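
The Moving SLAM entry above relies on photometric self-supervision: a source frame is warped into the target view using predicted depth and relative pose, and the reconstruction error against the real target frame trains both networks. Below is a minimal sketch of such a loss under standard pinhole-camera assumptions; it is an illustration, not that paper's implementation, and all names are hypothetical.

```python
# Sketch of a photometric reconstruction loss for self-supervised depth and
# pose learning (illustrative only; not the code of any paper listed above).
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, K, T_target_to_source):
    """target, source: Bx3xHxW frames; depth: Bx1xHxW predicted for `target`;
    K: Bx3x3 intrinsics; T_target_to_source: Bx4x4 predicted relative pose."""
    B, _, H, W = target.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()   # 3xHxW
    pix = pix.view(1, 3, -1).expand(B, -1, -1)                        # Bx3xN
    # Back-project to 3D with the predicted depth, then into the source camera.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)            # Bx3xN
    cam_h = torch.cat([cam, torch.ones(B, 1, cam.shape[-1])], dim=1)  # Bx4xN
    src = K @ (T_target_to_source @ cam_h)[:, :3]                     # Bx3xN
    # Normalise projected coordinates to [-1, 1] and resample the source frame.
    u = src[:, 0] / src[:, 2].clamp(min=1e-6) / (W - 1) * 2 - 1
    v = src[:, 1] / src[:, 2].clamp(min=1e-6) / (H - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    # L1 reconstruction error; minimising it trains depth and pose jointly.
    return (warped - target).abs().mean()
```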
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.