Extracting Triangular 3D Models, Materials, and Lighting From Images
- URL: http://arxiv.org/abs/2111.12503v5
- Date: Tue, 11 Apr 2023 07:05:24 GMT
- Title: Extracting Triangular 3D Models, Materials, and Lighting From Images
- Authors: Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
- Abstract summary: We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
- Score: 59.33666140713829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an efficient method for joint optimization of topology, materials
and lighting from multi-view image observations. Unlike recent multi-view
reconstruction approaches, which typically produce entangled 3D representations
encoded in neural networks, we output triangle meshes with spatially-varying
materials and environment lighting that can be deployed in any traditional
graphics engine unmodified. We leverage recent work in differentiable
rendering, coordinate-based networks to compactly represent volumetric
texturing, alongside differentiable marching tetrahedrons to enable
gradient-based optimization directly on the surface mesh. Finally, we introduce
a differentiable formulation of the split sum approximation of environment
lighting to efficiently recover all-frequency lighting. Experiments show our
extracted models being used in advanced scene editing, material decomposition, and
high-quality view interpolation, all running at interactive rates in
triangle-based renderers (rasterizers and path tracers). Project website:
https://nvlabs.github.io/nvdiffrec/ .
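The "split sum approximation" mentioned in the abstract comes from Karis's real-time image-based lighting work; nvdiffrec's contribution is a differentiable formulation of it. As a quick reference, here is the standard non-differentiable statement (our rendering of the textbook form, not an equation copied from the paper), with ω_i/ω_o the incoming/outgoing light directions, n the surface normal, L_i the incident radiance, f the specular BRDF, and D the GGX normal distribution function:

```latex
% Standard (non-differentiable) split sum approximation, after Karis 2013.
% Left factor: depends only on the BRDF f; typically precomputed into a
%   2D lookup table indexed by roughness and cos(theta).
% Right factor: a pre-filtered environment map, i.e. incident radiance L_i
%   blurred by the normal distribution D at each roughness level.
\int_{\Omega} L_i(\omega_i)\, f(\omega_i, \omega_o)\, (\omega_i \cdot n)\, d\omega_i
\approx
\int_{\Omega} f(\omega_i, \omega_o)\, (\omega_i \cdot n)\, d\omega_i
\;\cdot\;
\frac{\int_{\Omega} L_i(\omega_i)\, D(\omega_i)\, (\omega_i \cdot n)\, d\omega_i}
     {\int_{\Omega} D(\omega_i)\, (\omega_i \cdot n)\, d\omega_i}
```

Because both factors can be precomputed (a mip chain of filtered environment maps and a small BRDF lookup table), the approximation makes all-frequency environment lighting cheap enough to differentiate through during optimization.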
Related papers
- Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation [0.0]
Inverse rendering seeks to recover the physical properties of a scene, including lighting, geometry, textures, and materials.
Meshes, a traditional representation adopted by many simulation pipelines, have so far seen limited use in radiance-field-based inverse rendering.
This paper introduces Triangle Patchlet (Triplet for short), a novel mesh-based representation that comprehensively approximates these parameters.
arXiv Detail & Related papers (2024-10-16T09:59:11Z)
- FlexiDreamer: Single Image-to-3D Generation with FlexiCubes [20.871847154995688]
FlexiDreamer is a novel framework that directly reconstructs high-quality meshes from multi-view generated images.
Our approach generates high-fidelity 3D meshes for the single image-to-3D task in approximately one minute.
arXiv Detail & Related papers (2024-04-01T08:20:18Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian as its shortest eigenvector (the axis of least extent), with a directional masking scheme enforcing accurate normal estimation without external supervision; a minimal sketch of this normal computation appears after this list.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework demonstrates the potential of a point-based pipeline to replace the mesh-based graphics pipeline, enabling editing, ray tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our pipeline on a public 3D reconstruction dataset and show that it matches the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry (signed distance fields, SDFs) and for materials during optimization, benefiting from their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
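As referenced in the GIR entry above, that method estimates each Gaussian's normal from the shortest eigenvector of its covariance. The sketch below illustrates the idea only; the function name and the camera-facing sign flip are our illustrative stand-ins, not GIR's actual code or its directional masking scheme:

```python
import numpy as np

def gaussian_normal(covariance: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Estimate a 3D Gaussian's normal as its shortest principal axis.

    covariance: symmetric 3x3 covariance matrix of the Gaussian.
    view_dir:   unit vector from the Gaussian toward the camera, used to
                orient the sign-ambiguous eigenvector (a simple stand-in
                for GIR's directional masking).
    """
    # eigh returns eigenvalues in ascending order for symmetric matrices,
    # so column 0 is the axis of least extent -- the surface-normal proxy.
    eigvals, eigvecs = np.linalg.eigh(covariance)
    normal = eigvecs[:, 0]
    # Resolve the +/- sign ambiguity by pointing the normal at the camera.
    if np.dot(normal, view_dir) < 0.0:
        normal = -normal
    return normal

# Example: a flat "pancake" Gaussian lying in the xy-plane.
cov = np.diag([1.0, 1.0, 0.01])
print(gaussian_normal(cov, view_dir=np.array([0.0, 0.0, 1.0])))  # ~[0, 0, 1]
```

The intuition: a Gaussian fitted to a locally flat surface flattens into a disc, so its direction of least variance approximates the surface normal.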