Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
- URL: http://arxiv.org/abs/2211.10206v4
- Date: Tue, 21 Mar 2023 07:50:56 GMT
- Title: Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
- Authors: Zhen Li, Lingli Wang, Mofang Cheng, Cihui Pan, Jiaqi Yang
- Abstract summary: We present an efficient multi-view inverse rendering method for large-scale real-world indoor scenes.
The proposed method outperforms the state-of-the-art quantitatively and qualitatively.
It enables physically-reasonable mixed-reality applications such as material editing, editable novel view synthesis and relighting.
- Score: 5.9870673031762545
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present an efficient multi-view inverse rendering method for large-scale
real-world indoor scenes that reconstructs global illumination and
physically-reasonable SVBRDFs. Unlike previous representations, where the
global illumination of large scenes is simplified as multiple environment maps,
we propose a compact representation called Texture-based Lighting (TBL). It
consists of a 3D mesh and HDR textures, and efficiently models direct and
infinite-bounce indirect lighting of the entire large scene. Based on TBL, we
further propose a hybrid lighting representation with precomputed irradiance,
which significantly improves the efficiency and alleviates the rendering noise
in the material optimization. To physically disentangle the ambiguity between
materials, we propose a three-stage material optimization strategy based on the
priors of semantic segmentation and room segmentation. Extensive experiments
show that the proposed method outperforms the state-of-the-art quantitatively
and qualitatively, and enables physically-reasonable mixed-reality applications
such as material editing, editable novel view synthesis and relighting. The
project page is at https://lzleejean.github.io/TexIR.
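The hybrid lighting idea in the abstract — precomputing irradiance so the diffuse term in material optimization becomes a noise-free lookup — can be sketched as follows. This is a minimal illustration under our own assumptions (Lambertian diffuse term, surface normal fixed to +z); the function names are ours, not the paper's:

```python
import numpy as np

def precompute_irradiance(radiance_fn, n_samples=4096, seed=0):
    """Monte Carlo estimate of irradiance E = ∫ L_i(ω) (n·ω) dω over the
    hemisphere around n = +z. With cosine-weighted sampling (pdf = cosθ/π)
    the estimator simplifies to E ≈ (π / N) · Σ L_i(ω_k)."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    # Cosine-weighted hemisphere directions around +z.
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)],
                    axis=-1)
    return np.pi * np.mean([radiance_fn(d) for d in dirs])

def diffuse_shade(albedo, irradiance):
    """Lambertian diffuse term: L_o = (albedo / π) · E. Once E is cached,
    each material-optimization step reuses it with no sampling noise."""
    return albedo / np.pi * irradiance

# Constant unit environment: E = π exactly, so diffuse_shade returns albedo.
E = precompute_irradiance(lambda w: 1.0)
```

Under a constant environment the estimator is exact (E = π), which makes the sketch easy to check; the paper's irradiance would instead be precomputed against the HDR textures of the TBL representation.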
Related papers
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- UniVoxel: Fast Inverse Rendering by Unified Voxelization of Scene Representation [66.95976870627064]
We design a Unified Voxelization framework for explicit learning of scene representations, dubbed UniVoxel.
We propose to encode a scene into a latent volumetric representation, based on which the geometry, materials and illumination can be readily learned via lightweight neural networks.
Experiments show that UniVoxel boosts the optimization efficiency significantly compared to other methods, reducing the per-scene training time from hours to 18 minutes, while achieving favorable reconstruction quality.
arXiv Detail & Related papers (2024-07-28T17:24:14Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z)
- Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and material of facades using neural signed distance fields (SDFs).
The experiment demonstrates the superior quality of our method on facade holistic inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- NeILF: Neural Incident Light Field for Physically-based Material Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lightings as the Neural Incident Light Field (NeILF) and material properties as the surface BRDF modelled by multi-layer perceptrons.
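The NeILF formulation — incident light and BRDF as learned functions combined through the rendering equation — can be sketched with plain callables standing in for the MLPs. This is an illustration under our assumptions (normal fixed to +z, cosine-weighted sampling), not the paper's implementation:

```python
import numpy as np

def render_radiance(x, wo, incident_light, brdf, n_samples=2048, seed=0):
    """Estimate L_o(x, ωo) = ∫ L_i(x, ω) f_r(ω, ωo) (n·ω) dω over the
    hemisphere around n = +z. Cosine-weighted sampling (pdf = cosθ/π)
    reduces the estimator to (π / N) · Σ L_i(x, ω_k) f_r(ω_k, ωo)."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)],
                    axis=-1)
    return np.pi * np.mean([incident_light(x, w) * brdf(w, wo) for w in dirs])

# Stand-ins for the two networks: constant unit lighting, Lambertian BRDF.
L_o = render_radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      lambda x, w: 1.0, lambda w, wo: 0.8 / np.pi)
```

For the constant-light Lambertian case the estimate is exact (L_o = albedo), which makes the sketch easy to verify; in the paper both callables would be multi-layer perceptrons optimized against the input images.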
arXiv Detail & Related papers (2022-03-14T15:23:04Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.