Shape, Light & Material Decomposition from Images using Monte Carlo
Rendering and Denoising
- URL: http://arxiv.org/abs/2206.03380v1
- Date: Tue, 7 Jun 2022 15:19:18 GMT
- Title: Shape, Light & Material Decomposition from Images using Monte Carlo
Rendering and Denoising
- Authors: Jon Hasselgren, Nikolai Hofmann and Jacob Munkberg
- Abstract summary: We show that a more realistic shading model, incorporating ray tracing and Monte Carlo integration, substantially improves decomposition into shape, materials & lighting.
We incorporate multiple importance sampling and denoising in a novel inverse rendering pipeline.
This substantially improves convergence and enables gradient-based optimization at low sample counts.
- Score: 0.7366405857677225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in differentiable rendering have enabled high-quality
reconstruction of 3D scenes from multi-view images. Most methods rely on simple
rendering algorithms: pre-filtered direct lighting or learned representations
of irradiance. We show that a more realistic shading model, incorporating ray
tracing and Monte Carlo integration, substantially improves decomposition into
shape, materials & lighting. Unfortunately, Monte Carlo integration provides
estimates with significant noise, even at large sample counts, which makes
gradient-based inverse rendering very challenging. To address this, we
incorporate multiple importance sampling and denoising in a novel inverse
rendering pipeline. This substantially improves convergence and enables
gradient-based optimization at low sample counts. We present an efficient
method to jointly reconstruct geometry (explicit triangle meshes), materials,
and lighting, which substantially improves material and light separation
compared to previous work. We argue that denoising can become an integral part
of high quality inverse rendering pipelines.
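To make the role of multiple importance sampling (MIS) concrete, here is a minimal NumPy sketch, not the authors' pipeline: a 1D integrand standing in for the rendering equation's product of a broad BRDF lobe and a narrow light peak, estimated by two Gaussian sampling strategies playing the part of "BRDF sampling" and "light sampling". All function names, shapes, and parameters are illustrative assumptions.

```python
# Minimal sketch of multiple importance sampling with the balance heuristic.
# Toy setup (assumed, not from the paper): a broad "BRDF lobe" times a narrow
# "light peak" on the real line, sampled by two Gaussian strategies.
import numpy as np

rng = np.random.default_rng(0)

def brdf_like(x):
    return np.exp(-((x - 0.3) / 0.15) ** 2)      # broad lobe

def light_like(x):
    return np.exp(-((x - 0.65) / 0.05) ** 2)     # narrow peak

def integrand(x):
    return brdf_like(x) * light_like(x)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Strategy A ("BRDF sampling") is proportional to the lobe,
# strategy B ("light sampling") is proportional to the peak.
MU_A, SIG_A = 0.3, 0.15 / np.sqrt(2.0)
MU_B, SIG_B = 0.65, 0.05 / np.sqrt(2.0)

def single_strategy(n, mu, sigma):
    """Plain importance sampling with one proposal density."""
    x = rng.normal(mu, sigma, n)
    return np.mean(integrand(x) / normal_pdf(x, mu, sigma))

def mis_balance(n):
    """Balance-heuristic MIS: n samples from each strategy; each sample is
    weighted by p_own / (p_A + p_B) before the usual 1/p_own division."""
    def contrib(x, mu, sigma):
        p_a = normal_pdf(x, MU_A, SIG_A)
        p_b = normal_pdf(x, MU_B, SIG_B)
        p_own = normal_pdf(x, mu, sigma)
        return np.mean((p_own / (p_a + p_b)) * integrand(x) / p_own)
    return (contrib(rng.normal(MU_A, SIG_A, n), MU_A, SIG_A)
            + contrib(rng.normal(MU_B, SIG_B, n), MU_B, SIG_B))

# Dense Riemann-sum reference value for comparison.
grid = np.linspace(-1.0, 2.0, 600_001)
ref = integrand(grid).sum() * (grid[1] - grid[0])

for name, fn in [("BRDF sampling only ", lambda: single_strategy(16, MU_A, SIG_A)),
                 ("light sampling only", lambda: single_strategy(16, MU_B, SIG_B)),
                 ("MIS, 8 + 8 samples ", lambda: mis_balance(8))]:
    est = np.array([fn() for _ in range(2000)])
    print(f"{name}: mean {est.mean():.3e}  std {est.std():.3e}  (reference {ref:.3e})")
```

With the balance heuristic the combined estimator stays close to the better of the two strategies without knowing in advance which one that is; this is why MIS keeps the noise manageable at the low sample counts the paper targets, with the residual noise handled by the denoiser rather than by raising the sample count.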
Related papers
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework.
Our method extracts an explicit geometry (triangular mesh) in stage one, and introduces a more realistic physically-based inverse rendering model.
Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z)
- Denoising Monte Carlo Renders with Diffusion Models [5.228564799458042]
Physically-based renderings contain Monte-Carlo noise, with variance that increases as the number of rays per pixel decreases.
This noise, while zero-mean for good modern renderers, can have heavy tails.
We demonstrate that a diffusion model can denoise low fidelity renders successfully (a toy numerical sketch of this noise behaviour appears after this list).
arXiv Detail & Related papers (2024-03-30T23:19:40Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
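The Monte Carlo noise behaviour described in the "Denoising Monte Carlo Renders with Diffusion Models" entry above can be reproduced with a toy estimator. The sketch below is an assumed illustration (the 0.1% bright-path mixture is invented, not taken from that paper): per-pixel averages are unbiased, their variance shrinks roughly as 1/spp, and the rare bright paths give low-spp errors heavy tails.

```python
# Toy illustration (assumed numbers, not from the paper) of Monte Carlo noise:
# unbiased per-pixel estimates whose variance grows as samples per pixel (spp)
# shrink, with heavy tails caused by rare, very bright paths.
import numpy as np

rng = np.random.default_rng(1)

def path_sample(n):
    """Per-path radiance: mostly dim diffuse contributions, with a 0.1% chance
    of a very bright specular path (the source of 'fireflies')."""
    bright = rng.random(n) < 0.001
    return np.where(bright, 500.0, 0.5)

true_value = 0.999 * 0.5 + 0.001 * 500.0   # exact expectation of the toy paths

for spp in (1, 4, 16, 64, 256):
    # 100k "pixels", each averaging spp independent path samples
    pixels = path_sample(100_000 * spp).reshape(100_000, spp).mean(axis=1)
    err = pixels - true_value
    print(f"spp={spp:3d}  mean error {err.mean():+.4f}  "
          f"std {err.std():.4f}  max |error| {np.abs(err).max():8.2f}")
```

Expected behaviour: the mean error stays near zero at every sample count, the standard deviation drops roughly by half each time spp quadruples, and at low spp a few pixels are off by orders of magnitude, the firefly effect that denoisers are meant to clean up.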
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.