Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF
Material from Images via Differentiable Path Tracing
- URL: http://arxiv.org/abs/2012.03939v1
- Date: Sun, 6 Dec 2020 18:55:35 GMT
- Title: Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF
Material from Images via Differentiable Path Tracing
- Authors: Purvi Goel, Loudon Cohen, James Guesman, Vikas Thamizharasan, James
Tompkin, Daniel Ritchie
- Abstract summary: Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation.
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
- Score: 16.975014467319443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing object geometry and material from multiple views typically
requires optimization. Differentiable path tracing is an appealing framework as
it can reproduce complex appearance effects. However, it is difficult to use
due to high computational cost. In this paper, we explore how to use
differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet
material representation. In simulation, we find that it is possible to
reconstruct fine geometric and material detail from low resolution input views,
allowing high-quality reconstructions in a few hours despite the expense of
path tracing. The reconstructions successfully disambiguate shading, shadow,
and global illumination effects such as diffuse interreflection from material
properties. We demonstrate the impact of different geometry initializations,
including space carving, multi-view stereo, and 3D neural networks. Finally,
with input captured using smartphone video and a consumer 360° camera for
lighting estimation, we also show how to refine initial reconstructions of
real-world objects in unconstrained environments.
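A minimal sketch of the refinement loop the abstract describes (render the current estimate with a differentiable path tracer, compare against captured views, backpropagate to scene parameters) is shown below, using Mitsuba 3's autodiff renderer as a stand-in; the paper predates Mitsuba 3 and uses a different differentiable renderer, and the scene file, parameter key, and reference image here are hypothetical.

```python
# Sketch only: Mitsuba 3 stand-in for the paper's differentiable
# path-tracing refinement. Scene file and parameter key are hypothetical.
import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")  # CPU autodiff variant; "cuda_ad_rgb" on GPU

scene = mi.load_file("coarse_init.xml")          # coarse mesh + materials
params = mi.traverse(scene)

key = "object.bsdf.reflectance.data"             # hypothetical material key
opt = mi.ad.Adam(lr=0.02)
opt[key] = params[key]
params.update(opt)

# Captured target view; resolution is assumed to match the render.
ref = mi.TensorXf(mi.Bitmap("reference_view.exr"))

for it in range(200):
    img = mi.render(scene, params, spp=8, seed=it)  # low-spp stochastic gradients
    loss = dr.mean((img - ref) ** 2)                # photometric L2 loss
    dr.backward(loss)
    opt.step()
    params.update(opt)
```

The paper jointly refines geometry and per-facet material over many views; in this stand-in, vertex positions would follow the same pattern by registering a mesh's vertex-position parameter with the optimizer.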
Related papers
- Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation [0.0]
Inverse rendering seeks to derive the physical properties of a scene, including light, geometry, textures, and materials.
Meshes, a traditional representation adopted by many simulation pipelines, still have limited influence in radiance-field-based inverse rendering.
This paper introduces a novel framework called Triangle Patchlet (abbr. Triplet), a mesh-based representation, to comprehensively approximate these parameters.
arXiv Detail & Related papers (2024-10-16T09:59:11Z)
- Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction [51.3632308129838]
We present Total-Decom, a novel method for decomposed 3D reconstruction with minimal human interaction.
Our approach seamlessly integrates the Segment Anything Model (SAM) with hybrid implicit-explicit neural surface representations and a mesh-based region-growing technique for accurate 3D object decomposition.
We extensively evaluate our method on benchmark datasets and demonstrate its potential for downstream applications, such as animation and scene editing.
arXiv Detail & Related papers (2024-03-28T11:12:33Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime (a sketch of the neural-shading idea follows this entry).
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
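As a generic illustration of the neural-shading idea in the entry above (not the paper's actual architecture), a small MLP can map per-pixel G-buffer attributes produced by a differentiable rasterizer to color; the attribute layout below is assumed.

```python
# Hypothetical neural shader: maps per-pixel position, normal, and view
# direction (from a rasterized G-buffer, not shown) to an RGB color.
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, position, normal, view_dir):
        # Each input: [..., 3] G-buffer channel.
        return self.mlp(torch.cat([position, normal, view_dir], dim=-1))
```

In such a pipeline, the shader would be optimized jointly with the mesh vertices against a photometric loss.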
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method realizes high reconstruction accuracy in the textureless region and reduces efforts for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data [77.53134858717728]
We build on the strengths of recent advances in neural reconstruction and rendering, such as Neural Radiance Fields (NeRF).
We apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity.
We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
arXiv Detail & Related papers (2022-06-13T17:37:50Z)
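The soft symmetry constraint in the SNeS entry above can be illustrated as a penalty that compares a field's values at sample points against its values at their mirror images; `field`, the plane parameters, and the sampling below are all assumed for illustration.

```python
# Hedged sketch of a soft symmetry loss: reflect sample points across a
# candidate symmetry plane and penalize disagreement of the field values.
import torch

def soft_symmetry_loss(field, points, plane_normal, plane_offset):
    # field: callable mapping [N, 3] points to per-point values
    #        (e.g. an SDF or albedo network); plane_normal: [3] unit vector.
    d = points @ plane_normal - plane_offset          # signed distance to plane
    mirrored = points - 2.0 * d[:, None] * plane_normal
    return torch.mean((field(points) - field(mirrored)) ** 2)
```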
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
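As a toy stand-in for the joint material/lighting optimization described in the entry above (the actual method recovers full triangle meshes, PBR materials, and an environment map), the snippet below jointly fits a per-pixel albedo and order-1 spherical-harmonic lighting to reference colors under fixed normals; all data here are stand-ins.

```python
# Toy joint optimization of material (albedo) and lighting (SH coefficients);
# normals and target colors are stand-ins for real multi-view observations.
import torch

def sh_basis(n):
    # Order-1 spherical-harmonic basis (constant term plus x, y, z): [N, 4].
    return torch.cat([torch.ones(n.shape[0], 1), n], dim=-1)

normals = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
target = torch.rand(1024, 3)                      # stand-in reference pixels
albedo = torch.rand(1024, 3, requires_grad=True)  # per-pixel material
sh = torch.zeros(4, 3, requires_grad=True)        # RGB lighting coefficients

opt = torch.optim.Adam([albedo, sh], lr=1e-2)
for _ in range(200):
    shading = sh_basis(normals) @ sh              # [N, 3] irradiance estimate
    loss = ((albedo * shading - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```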
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes [75.63464905190061]
Complex light paths induced by refraction and reflection have prevented both traditional and deep multiview stereo from solving this problem.
We propose a physically-based network to recover 3D shape of transparent objects using a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
arXiv Detail & Related papers (2020-04-22T23:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.