PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes
- URL: http://arxiv.org/abs/2207.00757v1
- Date: Sat, 2 Jul 2022 06:52:44 GMT
- Title: PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes
- Authors: Yu-Ying Yeh, Zhengqin Li, Yannick Hold-Geoffroy, Rui Zhu, Zexiang Xu,
Miloš Hašan, Kalyan Sunkavalli, Manmohan Chandraker
- Abstract summary: PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
- Score: 84.66946637534089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most indoor 3D scene reconstruction methods focus on recovering 3D geometry
and scene layout. In this work, we go beyond this to propose PhotoScene, a
framework that takes input image(s) of a scene along with approximately aligned
CAD geometry (either reconstructed automatically or manually specified) and
builds a photorealistic digital twin with high-quality materials and similar
lighting. We model scene materials using procedural material graphs; such
graphs represent photorealistic and resolution-independent materials. We
optimize the parameters of these graphs and their texture scale and rotation,
as well as the scene lighting to best match the input image via a
differentiable rendering layer. We evaluate our technique on objects and layout
reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate
that our method reconstructs high-quality, fully relightable 3D scenes that can
be re-rendered under arbitrary viewpoints, zooms and lighting.
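The abstract describes an optimization that pushes procedural material parameters, texture scale and rotation, and scene lighting through a differentiable rendering layer until the re-rendered scene matches the input photograph. Below is a minimal, self-contained sketch of that loop in PyTorch; the toy_render function, the two-parameter checker "material graph", and all tensor shapes are illustrative stand-ins, not the PhotoScene implementation, which uses full procedural material graphs and a physically based differentiable renderer.

    # Minimal sketch (not the authors' code): optimize material, UV, and lighting
    # parameters through a differentiable render step to match a target image.
    import torch

    def toy_render(albedo_params, uv_scale, uv_rot, light_intensity, normals):
        # Stand-in for the differentiable rendering layer: evaluates a simple
        # parameterized checker albedo under a single directional light.
        h, w, _ = normals.shape
        ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                                torch.linspace(0, 1, w), indexing="ij")
        # Apply texture rotation and scale to the UV coordinates.
        u = (xs * torch.cos(uv_rot) - ys * torch.sin(uv_rot)) * uv_scale
        v = (xs * torch.sin(uv_rot) + ys * torch.cos(uv_rot)) * uv_scale
        checker = 0.5 + 0.5 * torch.sin(u * 6.2832) * torch.sin(v * 6.2832)
        albedo = albedo_params[0] + albedo_params[1] * checker  # 2-parameter "graph"
        light_dir = torch.tensor([0.0, 0.0, 1.0])
        shading = (normals @ light_dir).clamp(min=0.0) * light_intensity
        return albedo * shading

    # Hypothetical target photo and geometry normals (in practice: the input image
    # and normals rendered from the approximately aligned CAD geometry).
    target = torch.rand(64, 64)
    normals = torch.zeros(64, 64, 3)
    normals[..., 2] = 1.0

    # Parameters being optimized: material parameters, texture scale/rotation, lighting.
    albedo_params = torch.tensor([0.3, 0.2], requires_grad=True)
    uv_scale = torch.tensor(4.0, requires_grad=True)
    uv_rot = torch.tensor(0.0, requires_grad=True)
    light_intensity = torch.tensor(1.0, requires_grad=True)

    opt = torch.optim.Adam([albedo_params, uv_scale, uv_rot, light_intensity], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        rendered = toy_render(albedo_params, uv_scale, uv_rot, light_intensity, normals)
        loss = torch.nn.functional.mse_loss(rendered, target)  # photometric match
        loss.backward()
        opt.step()

Once optimized, the recovered material and lighting parameters can be re-rendered under new viewpoints, zooms, or lighting, which is what makes the resulting scene a relightable digital twin.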
Related papers
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- PSDR-Room: Single Photo to Scene using Differentiable Rendering [18.23851486874071]
A 3D digital scene contains many components: lights, materials and geometries, interacting to reach the desired appearance.
We propose PSDR-Room, a system that optimizes the lighting as well as the pose and materials of individual objects to match a target image of a room scene, with minimal user input.
arXiv Detail & Related papers (2023-07-06T18:17:59Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.