Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo
Rendering
- URL: http://arxiv.org/abs/2103.15208v1
- Date: Sun, 28 Mar 2021 19:44:05 GMT
- Title: Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo
Rendering
- Authors: Fujun Luan, Shuang Zhao, Kavita Bala, Zhao Dong
- Abstract summary: We introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions.
Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both.
- To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable renderer built on recent advances in differentiable rendering theory.
- Score: 20.68222611798537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing the shape and appearance of real-world objects using measured
2D images has been a long-standing problem in computer vision. In this paper,
we introduce a new analysis-by-synthesis technique capable of producing
high-quality reconstructions through robust coarse-to-fine optimization and
physics-based differentiable rendering.
Unlike most previous methods that handle geometry and reflectance largely
separately, our method unifies the optimization of both by leveraging image
gradients with respect to both object reflectance and geometry. To obtain
physically accurate gradient estimates, we develop a new GPU-based Monte Carlo
differentiable renderer leveraging recent advances in differentiable rendering
theory to offer unbiased gradients while enjoying better performance than
existing tools like PyTorch3D and redner. To further improve robustness, we
utilize several shape and material priors as well as a coarse-to-fine
optimization strategy to reconstruct geometry. We demonstrate that our
technique can produce reconstructions with higher quality than previous methods
such as COLMAP and Kinect Fusion.
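To make the analysis-by-synthesis loop concrete, below is a minimal sketch of this kind of joint geometry-and-reflectance optimization written in PyTorch. The `render` callable is a hypothetical stand-in for a differentiable renderer (the paper uses its own GPU-based Monte Carlo renderer; PyTorch3D and redner expose comparable functionality), and the shape/material priors and coarse-to-fine schedule described above are omitted for brevity.

```python
import torch

def reconstruct(render, target_images, cameras, init_vertices, init_brdf, n_iters=500):
    # Jointly optimize mesh vertex positions and SVBRDF parameters, driven only
    # by image-space gradients flowing back through a differentiable renderer.
    vertices = init_vertices.clone().requires_grad_(True)
    brdf_params = init_brdf.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([vertices, brdf_params], lr=1e-2)

    for _ in range(n_iters):
        optimizer.zero_grad()
        loss = 0.0
        for img, cam in zip(target_images, cameras):
            # `render` is a hypothetical differentiable renderer:
            # (mesh vertices, SVBRDF parameters, camera) -> image,
            # with gradients w.r.t. both vertices and brdf_params.
            rendered = render(vertices, brdf_params, cam)
            loss = loss + torch.nn.functional.l1_loss(rendered, img)
        # Shape priors (e.g. Laplacian smoothness) and material priors would be
        # added to `loss` here in a fuller implementation.
        loss.backward()
        optimizer.step()
    return vertices.detach(), brdf_params.detach()
```

The gradients flowing through `render` into both `vertices` and `brdf_params` are what couple the geometry and reflectance updates, which is the unification the abstract emphasizes.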
Related papers
- GlossyGS: Inverse Rendering of Glossy Objects with 3D Gaussian Splatting [21.23724172779984]
GlossyGS aims to precisely reconstruct the geometry and materials of glossy objects by integrating material priors.
We demonstrate through quantitative analysis and qualitative visualization that the proposed method is effective in reconstructing high-fidelity geometries and materials of glossy objects.
arXiv Detail & Related papers (2024-10-17T09:00:29Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- $R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement [5.810659946867557]
Mesh reconstruction based on Neural Radiance Fields (NeRF) is popular in a variety of applications such as computer graphics, virtual reality, and medical imaging.
We propose a novel algorithm that progressively generates and optimizes meshes from multi-view images.
Our method delivers highly competitive and robust performance in both mesh rendering quality and geometric quality.
arXiv Detail & Related papers (2024-08-19T16:33:17Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method [19.330797817738542]
We introduce a novel hybrid differentiable rendering method to efficiently reconstruct the 3D geometry and reflectance of a scene.
Our method can produce reconstructions with similar or higher quality than state-of-the-art methods while being more efficient.
arXiv Detail & Related papers (2023-08-19T12:48:10Z)
- Gradient-Based Geometry Learning for Fan-Beam CT Reconstruction [7.04200827802994]
The differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry.
As a proof-of-concept experiment, this idea is applied to rigid motion compensation.
The algorithm achieves a 35.5% reduction in MSE and a 12.6% improvement in SSIM over the motion-affected reconstruction.
arXiv Detail & Related papers (2022-12-05T11:18:52Z)
- Adaptive Joint Optimization for 3D Reconstruction with Differentiable Rendering [22.2095090385119]
Given an imperfect reconstructed 3D model, most previous methods have focused on the refinement of either geometry, texture, or camera pose.
We propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework.
Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic.
arXiv Detail & Related papers (2022-08-15T04:32:41Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry as signed distance fields (SDFs) and materials during optimization to enjoy their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)