RelitLRM: Generative Relightable Radiance for Large Reconstruction Models
- URL: http://arxiv.org/abs/2410.06231v2
- Date: Thu, 10 Oct 2024 05:41:49 GMT
- Title: RelitLRM: Generative Relightable Radiance for Large Reconstruction Models
- Authors: Tianyuan Zhang, Zhengfei Kuang, Haian Jin, Zexiang Xu, Sai Bi, Hao Tan, He Zhang, Yiwei Hu, Milos Hasan, William T. Freeman, Kai Zhang, Fujun Luan
- Abstract summary: We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show our sparse-view feed-forward RelitLRM offers relighting results competitive with state-of-the-art dense-view optimization-based baselines.
- Score: 52.672706620003765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose RelitLRM, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. Unlike prior inverse rendering methods, which require dense captures and slow optimization and often cause artifacts like incorrect highlights or shadow baking, RelitLRM adopts a feed-forward transformer-based model with a novel combination of a geometry reconstructor and a relightable appearance generator based on diffusion. The model is trained end-to-end on synthetic multi-view renderings of objects under varying known illuminations. This design enables the model to effectively decompose geometry and appearance, resolve the ambiguity between material and lighting, and capture the multi-modal distribution of shadows and specularity in the relit appearance. We show our sparse-view feed-forward RelitLRM offers relighting results competitive with state-of-the-art dense-view optimization-based baselines while being significantly faster. Our project page is available at: https://relit-lrm.github.io/.
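To make the two-part design concrete, below is a minimal PyTorch sketch of the described architecture: a transformer-based geometry reconstructor whose features condition a diffusion-style appearance denoiser. All module names, tensor shapes, and the lighting-conditioning interface are hypothetical placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GeometryReconstructor(nn.Module):
    """Hypothetical stand-in: maps tokens from sparse posed images to
    per-Gaussian geometry parameters (all shapes/sizes are assumptions)."""
    def __init__(self, d_model=256, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # 11 channels: position (3) + scale (3) + rotation quaternion (4) + opacity (1)
        self.head = nn.Linear(d_model, 11)

    def forward(self, image_tokens):            # (B, N, d_model)
        feats = self.encoder(image_tokens)
        return self.head(feats), feats          # geometry params + features

class AppearanceDenoiser(nn.Module):
    """Hypothetical stand-in for one step of the diffusion-based appearance
    generator: denoises per-Gaussian colors conditioned on geometry features
    and an embedding of the target illumination."""
    def __init__(self, d_model=256, light_dim=64, color_dim=3):
        super().__init__()
        self.color_in = nn.Linear(color_dim, d_model)
        self.light_in = nn.Linear(light_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.color_out = nn.Linear(d_model, color_dim)

    def forward(self, noisy_colors, geom_feats, light_embed):
        h = self.color_in(noisy_colors) + geom_feats
        h = h + self.light_in(light_embed).unsqueeze(1)  # broadcast lighting token
        # a real diffusion model would also embed the timestep; omitted for brevity
        return self.color_out(self.encoder(h))           # predicted noise

B, N, D = 1, 128, 256                          # batch, Gaussians, model width
tokens = torch.randn(B, N, D)                  # stand-in for posed-image tokens
geometry, feats = GeometryReconstructor()(tokens)
noise = AppearanceDenoiser()(torch.randn(B, N, 3), feats, torch.randn(B, 64))
print(geometry.shape, noise.shape)             # (1, 128, 11) (1, 128, 3)
```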
Related papers
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques popularly used for inverse rendering, forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
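Since the summary hinges on the forward-vs-deferred distinction, here is a textbook sketch of the two strategies on toy per-pixel buffers (Lambertian shading and simple averaging are assumptions for illustration; GUS-IR's actual Gaussian-splatting formulation differs): forward shading shades each fragment and then composites, while deferred shading composites geometry attributes into a G-buffer and shades once.

```python
import torch
import torch.nn.functional as F

def lambert(albedo, normal, light_dir):
    # shared diffuse shading term: albedo * max(n . l, 0)
    return albedo * normal.matmul(light_dir).clamp(min=0.0).unsqueeze(-1)

def forward_shading(fragments, light_dir):
    # shade every fragment first, then composite (here: a simple average)
    return torch.stack([lambert(a, n, light_dir) for a, n in fragments]).mean(0)

def deferred_shading(fragments, light_dir):
    # composite attributes into a G-buffer first, then shade once
    albedo = torch.stack([a for a, _ in fragments]).mean(0)
    normal = F.normalize(torch.stack([n for _, n in fragments]).mean(0), dim=-1)
    return lambert(albedo, normal, light_dir)

# toy data: two overlapping fragments per pixel on a 4x4 image
frags = [(torch.rand(4, 4, 3), F.normalize(torch.randn(4, 4, 3), dim=-1))
         for _ in range(2)]
light = torch.tensor([0.0, 0.0, 1.0])
print(forward_shading(frags, light).shape, deferred_shading(frags, light).shape)
```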
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework.
Our method extracts an explicit geometry (triangular mesh) in stage one, and introduces a more realistic physically-based inverse rendering model.
Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
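The "reservoir sampling" in the title refers to single-pass weighted sampling, the building block of ReSTIR-style light selection. A generic sketch of the core idea follows (weights assumed positive; this is not MIRReS's renderer code):

```python
import random

def weighted_reservoir_sample(stream):
    """Pick one item from a stream of (item, weight) pairs in a single pass,
    with probability proportional to its weight (weights assumed positive)."""
    total, chosen = 0.0, None
    for item, weight in stream:
        total += weight
        # replace the current choice with probability weight / total;
        # this yields an overall selection probability of weight / sum(weights)
        if random.random() < weight / total:
            chosen = item
    return chosen

# e.g., picking one light source proportionally to its contribution estimate
lights = [("key", 5.0), ("fill", 2.0), ("rim", 1.0)]
print(weighted_reservoir_sample(lights))
```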
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
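For background on the triplane representation mentioned above (the standard EG3D-style technique; GTR's exact layout is an assumption here), features for a 3D point are bilinearly sampled from three axis-aligned feature planes and aggregated:

```python
import torch
import torch.nn.functional as F

def triplane_features(planes, xyz):
    """planes: (3, C, H, W) feature maps for the XY, XZ, YZ planes;
    xyz: (N, 3) query points in [-1, 1]^3. Returns (N, C) features."""
    # project each point onto the three planes
    coords = torch.stack([xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]])  # (3, N, 2)
    feats = F.grid_sample(planes, coords.unsqueeze(2), align_corners=True)  # (3, C, N, 1)
    return feats.squeeze(-1).sum(dim=0).transpose(0, 1)  # sum over planes -> (N, C)

planes = torch.randn(3, 32, 64, 64)
points = torch.rand(100, 3) * 2 - 1
print(triplane_features(planes, points).shape)  # torch.Size([100, 32])
```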
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
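As a reminder of what such a decomposition targets, the quantity being inverted is the rendering equation (textbook form, not NeFII's specific formulation), where the BRDF f_r is the "materials" term and L_i the "illumination"; near-field indirect illumination means L_i includes light bounced off nearby surfaces rather than only a distant environment map:

```latex
L_o(\mathbf{x}, \omega_o) = \int_{\Omega}
    f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,
    (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
```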
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
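In the spirit of this summary, joint optimization of materials and lighting means backpropagating an image loss through a differentiable renderer into both sets of parameters. A toy sketch follows (the `render` function and the parameterization are placeholders, not the paper's pipeline):

```python
import torch

def render(albedo, roughness, env_light):
    # placeholder differentiable image formation; any differentiable renderer works
    return albedo * env_light.mean() * (1.0 - 0.5 * roughness)

target = torch.full((3,), 0.4)                      # observed pixel color (toy)
albedo = torch.rand(3, requires_grad=True)          # spatially-varying in practice
roughness = torch.rand(1, requires_grad=True)
env_light = torch.rand(16, requires_grad=True)      # toy environment lighting

opt = torch.optim.Adam([albedo, roughness, env_light], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = (render(albedo, roughness, env_light) - target).pow(2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```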
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports photorealistic effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Recently, ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.