Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting
- URL: http://arxiv.org/abs/2109.06061v1
- Date: Mon, 13 Sep 2021 15:29:03 GMT
- Title: Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting
- Authors: Zian Wang, Jonah Philion, Sanja Fidler, Jan Kautz
- Abstract summary: We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
- Score: 149.1673041605155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we address the problem of jointly estimating albedo, normals,
depth and 3D spatially-varying lighting from a single image. Most existing
methods formulate the task as image-to-image translation, ignoring the 3D
properties of the scene. However, indoor scenes contain complex 3D light
transport where a 2D representation is insufficient. In this paper, we propose
a unified, learning-based inverse rendering framework that formulates 3D
spatially-varying lighting. Inspired by classic volume rendering techniques, we
propose a novel Volumetric Spherical Gaussian representation for lighting,
which parameterizes the exitant radiance of the 3D scene surfaces on a voxel
grid. We design a physics-based differentiable renderer that utilizes our 3D
lighting representation and formulates the energy-conserving image formation
process, enabling joint training of all intrinsic properties with a
re-rendering constraint. Our model ensures physically correct predictions and
avoids the need for ground-truth HDR lighting, which is not easily accessible.
Experiments show that our method outperforms prior works both quantitatively
and qualitatively, and is capable of producing photorealistic results for AR
applications such as virtual object insertion even for highly specular objects.
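To make the abstract's two core ideas concrete, the sketch below illustrates (a) a Volumetric Spherical Gaussian lighting grid, where each voxel stores an opacity and a single spherical Gaussian lobe parameterizing exitant radiance, and (b) querying incident light at a point by ray marching through the grid with classic volume-rendering alpha compositing. This is a minimal, illustrative sketch rather than the authors' implementation; the class name `VolumetricSGLighting`, the one-lobe-per-voxel choice, the grid resolution, and the ray-marching parameters are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's code): a voxel grid of Spherical
# Gaussians (SG) for exitant radiance, queried by volume-rendering-style
# ray marching. Shapes, names, and defaults are assumptions for the example.
import numpy as np

def eval_spherical_gaussian(direction, axis, sharpness, amplitude):
    """Radiance of one SG lobe along a unit `direction`:
    G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1))."""
    cos_term = np.clip(np.dot(direction, axis), -1.0, 1.0)
    return amplitude * np.exp(sharpness * (cos_term - 1.0))

class VolumetricSGLighting:
    """Voxel grid where each cell stores an opacity and one RGB SG lobe."""
    def __init__(self, resolution=32):
        self.res = resolution
        self.alpha = np.zeros((resolution,) * 3)              # per-voxel opacity
        self.axis = np.zeros((resolution,) * 3 + (3,))        # SG lobe direction
        self.sharpness = np.ones((resolution,) * 3)           # SG lobe concentration
        self.amplitude = np.zeros((resolution,) * 3 + (3,))   # SG RGB amplitude

    def incident_radiance(self, origin, direction, step=0.5, n_steps=64):
        """March a ray from `origin` along `direction` (in voxel coordinates)
        and alpha-composite the SG radiance emitted back toward the origin."""
        direction = direction / np.linalg.norm(direction)
        radiance = np.zeros(3)
        transmittance = 1.0
        for i in range(1, n_steps + 1):
            p = origin + i * step * direction
            idx = tuple(np.floor(p).astype(int))
            if any(c < 0 or c >= self.res for c in idx):
                break                                  # ray has left the grid
            a = self.alpha[idx]
            if a <= 0.0:
                continue                               # empty voxel
            # Radiance leaving this voxel toward the query point (-direction).
            emitted = eval_spherical_gaussian(
                -direction, self.axis[idx], self.sharpness[idx], self.amplitude[idx])
            radiance += transmittance * a * emitted
            transmittance *= 1.0 - a
            if transmittance < 1e-3:
                break                                  # early ray termination
        return radiance

# Hypothetical usage: sample incident light at a surface point.
grid = VolumetricSGLighting(resolution=32)
L_in = grid.incident_radiance(origin=np.array([16.0, 16.0, 16.0]),
                              direction=np.array([0.0, 0.0, 1.0]))
```

In the paper's pipeline, a differentiable renderer would combine such incident-lighting queries with the predicted albedo, normals, and depth to re-render the input view; penalizing the difference between that re-rendering and the observed image (the re-rendering constraint) is what allows all intrinsic properties to be trained jointly without ground-truth HDR lighting.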
Related papers
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation [45.69270771487455]
We propose a new method of Fantasia3D for high-quality text-to-3D content creation.
Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance.
Our framework is more compatible with popular graphics engines, supporting relighting, editing, and physical simulation of the generated 3D assets.
arXiv Detail & Related papers (2023-03-24T09:30:09Z)
- 3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue [49.62477229140788]
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering.
We propose a lighting transfer network (LighTNet) to bridge neural field rendering (NFR) and physically-based rendering (PBR), such that they can benefit from each other.
arXiv Detail & Related papers (2022-11-27T13:31:00Z)
- TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition [39.312567993736025]
We propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner.
We show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes.
arXiv Detail & Related papers (2022-10-20T13:52:18Z)
- GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- 3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations [31.095503715696722]
We propose factorized representations that are view-independent and light-disentangled, together with training schemes that use randomly sampled light conditions.
We demonstrate the superiority of our method by visualizing factorized representations, re-lighted images, and albedo-textured meshes.
This is the first work that extracts albedo-textured meshes from unposed 2D images without any additional labels or assumptions.
arXiv Detail & Related papers (2022-03-12T15:23:17Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Building on this observation, we propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)