IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from
Photometric Images
- URL: http://arxiv.org/abs/2204.02232v1
- Date: Tue, 5 Apr 2022 14:14:18 GMT
- Title: IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from
Photometric Images
- Authors: Kai Zhang and Fujun Luan and Zhengqi Li and Noah Snavely
- Abstract summary: We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry as signed distance fields (SDFs) and materials during optimization to enjoy their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
- Score: 52.021529273866896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a neural inverse rendering pipeline called IRON that operates on
photometric images and outputs high-quality 3D content in the format of
triangle meshes and material textures readily deployable in existing graphics
pipelines. Our method adopts neural representations for geometry as signed
distance fields (SDFs) and materials during optimization to enjoy their
flexibility and compactness, and features a hybrid optimization scheme for
neural SDFs: first, optimize using a volumetric radiance field approach to
recover correct topology, then optimize further using edge-aware physics-based
surface rendering for geometry refinement and disentanglement of materials and
lighting. In the second stage, we also draw inspiration from mesh-based
differentiable rendering, and design a novel edge sampling algorithm for neural
SDFs to further improve performance. We show that our IRON achieves
significantly better inverse rendering quality compared to prior works. Our
project page is here: https://kai-46.github.io/IRON-website/
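As a rough illustration of the hybrid scheme described in the abstract, the PyTorch sketch below alternates a volumetric stage and a surface-rendering stage while optimizing a neural SDF, with an Eikonal regularizer keeping the field a valid signed distance function. The network size, the schedule, and the two placeholder losses are illustrative assumptions, not IRON's actual implementation.

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """Tiny MLP mapping 3D points to signed distances (illustrative only)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def eikonal_loss(sdf, pts):
    """Encourage |grad f| = 1 so the network remains a valid SDF."""
    pts = pts.clone().requires_grad_(True)
    grad = torch.autograd.grad(sdf(pts).sum(), pts, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

def stage1_loss(sdf, pts):
    # Placeholder for the stage-1 volume-rendering photometric loss
    # (a NeuS/VolSDF-style objective against the captured images).
    return sdf(pts).abs().mean()

def stage2_loss(sdf, pts):
    # Placeholder for the stage-2 edge-aware, physics-based surface
    # rendering loss that refines geometry and separates materials/lighting.
    return sdf(pts).pow(2).mean()

sdf = NeuralSDF()
opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)
for step in range(2000):
    pts = torch.rand(1024, 3) * 2 - 1  # random points in [-1, 1]^3
    # Stage 1 recovers correct topology; stage 2 refines the surface.
    photometric = stage1_loss(sdf, pts) if step < 1000 else stage2_loss(sdf, pts)
    loss = photometric + 0.1 * eikonal_loss(sdf, pts)
    opt.zero_grad()
    loss.backward()
    opt.step()
```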
Related papers
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
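A minimal sketch of the ray-wise weighting idea: the single global Eikonal coefficient is replaced by a per-ray factor. The tensors and the weighting rule below are illustrative, not the formula from the paper.

```python
import torch

def weighted_eikonal(grad_norms, ray_weights):
    """Eikonal penalty with a per-ray weight instead of a global constant.

    grad_norms:  (R, S) |grad f| at S samples along each of R rays
    ray_weights: (R,)   per-ray factors, e.g. down-weighting rays where a
                 strict regularization would smooth away fine detail
                 (this weighting rule is purely illustrative).
    """
    per_ray = ((grad_norms - 1.0) ** 2).mean(dim=-1)  # (R,)
    return (ray_weights * per_ray).mean()

# Usage: compared to a global constant, each ray gets its own factor.
grad_norms = torch.rand(8, 64) + 0.5
ray_weights = torch.rand(8)
loss = weighted_eikonal(grad_norms, ray_weights)
```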
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- GaussianRoom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction [3.043712258792239]
We present a unified framework integrating neural SDF with 3DGS.
This framework incorporates a learnable neural SDF field to guide the densification and pruning of Gaussians.
Our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.
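A minimal sketch of SDF-guided densification and pruning, assuming access to Gaussian centers and an SDF query function; the thresholds and jitter scheme are hypothetical, not the paper's exact procedure.

```python
import torch

def sdf_guided_update(centers, sdf_fn, prune_tau=0.1, densify_tau=0.01):
    """Use SDF values at Gaussian centers to guide pruning/densification.

    Gaussians far from the SDF zero level set are pruned, and Gaussians
    close to the surface are cloned with a small jitter. Thresholds and
    the jitter scale are illustrative assumptions.
    """
    d = sdf_fn(centers).abs().squeeze(-1)           # (N,) distance to surface
    keep = centers[d < prune_tau]                   # prune far-from-surface
    near = keep[sdf_fn(keep).abs().squeeze(-1) < densify_tau]
    clones = near + 0.005 * torch.randn_like(near)  # densify near the surface
    return torch.cat([keep, clones], dim=0)

# Usage with a toy SDF (unit sphere):
sphere_sdf = lambda p: p.norm(dim=-1, keepdim=True) - 1.0
centers = torch.randn(1000, 3)
centers = sdf_guided_update(centers, sphere_sdf)
```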
arXiv Detail & Related papers (2024-05-30T03:46:59Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
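Baking a neural appearance field into a texture image can be sketched as evaluating the field at the surface point behind each texel; `appearance_fn` and `uv_to_xyz` below are hypothetical stand-ins for the pipeline's actual components.

```python
import torch

def bake_texture(appearance_fn, uv_to_xyz, res=512):
    """Bake a neural appearance field into an RGB texture image.

    appearance_fn: maps 3D surface points (N, 3) -> RGB (N, 3)
    uv_to_xyz:     maps UV coords (N, 2) -> surface points (N, 3),
                   i.e. the mesh's inverse UV parameterization.
    Both callables are placeholders for whatever the pipeline provides.
    """
    u, v = torch.meshgrid(torch.linspace(0, 1, res),
                          torch.linspace(0, 1, res), indexing="ij")
    uv = torch.stack([u, v], dim=-1).reshape(-1, 2)  # one query per texel
    rgb = appearance_fn(uv_to_xyz(uv))               # evaluate field on surface
    return rgb.reshape(res, res, 3).clamp(0, 1)      # H x W x 3 texture

# Toy usage: a flat plane with a position-dependent color field.
texture = bake_texture(
    lambda p: p.abs() % 1.0,
    lambda uv: torch.cat([uv, torch.zeros_like(uv[:, :1])], dim=-1))
```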
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
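A minimal sketch of the neural deferred shading idea: a rasterized G-buffer of per-pixel positions and normals is fed, together with the view direction, to a small shading MLP. The G-buffer here is faked with random tensors; in the real pipeline a differentiable rasterizer supplies it from the triangle mesh.

```python
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    """Deferred shading: per-pixel G-buffer attributes -> RGB (a sketch)."""
    def __init__(self, hidden=64):
        super().__init__()
        # Input: position (3) + normal (3) + view direction (3).
        self.net = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, position, normal, view_dir):
        return self.net(torch.cat([position, normal, view_dir], dim=-1))

# Stand-in G-buffer; a differentiable rasterizer would produce this.
H = W = 64
gbuf_pos = torch.randn(H * W, 3)
gbuf_nrm = torch.nn.functional.normalize(torch.randn(H * W, 3), dim=-1)
view_dir = torch.nn.functional.normalize(torch.randn(H * W, 3), dim=-1)

shader = NeuralShader()
rgb = shader(gbuf_pos, gbuf_nrm, view_dir).reshape(H, W, 3)
# A photometric loss against the captured image would then drive both the
# shader weights and (through the rasterizer) the mesh vertices.
```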
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
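For reference, the differentiable rendering formulation at NeRF's core composites per-sample densities and colors along each ray; a minimal version of that compositing step:

```python
import torch

def composite(sigmas, colors, deltas):
    """Standard NeRF volume rendering along rays.

    sigmas: (R, S)    densities at S samples per ray
    colors: (R, S, 3) radiance at each sample
    deltas: (R, S)    distances between adjacent samples
    """
    alpha = 1.0 - torch.exp(-sigmas * deltas)  # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                        # transmittance T_i
    weights = alpha * trans                    # w_i = T_i * alpha_i
    return (weights.unsqueeze(-1) * colors).sum(dim=1)  # (R, 3) pixel colors

rgb = composite(torch.rand(4, 32), torch.rand(4, 32, 3),
                torch.full((4, 32), 0.03))
```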
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
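A minimal sketch of such joint optimization: material textures and an environment light are leaf tensors updated through a differentiable renderer. The renderer below is a trivially differentiable stand-in, and all shapes are illustrative assumptions.

```python
import torch

# Learnable scene parameters: spatially-varying material textures and an
# environment light map (resolutions are illustrative).
kd  = torch.rand(256, 256, 3, requires_grad=True)  # diffuse albedo texture
ks  = torch.rand(256, 256, 3, requires_grad=True)  # roughness/metalness texture
env = torch.rand(16, 32, 3, requires_grad=True)    # environment light probe

def render(kd, ks, env):
    # Placeholder for a differentiable renderer (rasterization plus a
    # physically based shading model); this stand-in merely stays
    # differentiable so gradients reach all three parameter tensors.
    return kd.mean(dim=(0, 1)) * env.mean(dim=(0, 1)) + 0.1 * ks.mean(dim=(0, 1))

opt = torch.optim.Adam([kd, ks, env], lr=1e-2)
target = torch.tensor([0.4, 0.3, 0.2])  # stand-in for one captured view
for _ in range(200):
    opt.zero_grad()
    loss = (render(kd, ks, env) - target).pow(2).mean()
    loss.backward()
    opt.step()
```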
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
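A heavily simplified sketch of the adversarial-renderer idea: a generator maps face parameters to images while a discriminator pushes its outputs toward the real-image distribution. The architectures and dimensions here are illustrative, not GAR's.

```python
import torch
import torch.nn as nn

# Generator: stand-in face parameters -> flattened 32x32 RGB image.
G = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 32 * 32 * 3))
# Discriminator: image -> real/fake logit.
D = nn.Sequential(nn.Linear(32 * 32 * 3, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):
    params = torch.randn(16, 128)       # stand-in face parameters
    real = torch.rand(16, 32 * 32 * 3)  # stand-in real face images
    fake = G(params)
    # Discriminator step: separate real images from rendered ones.
    loss_d = (bce(D(real), torch.ones(16, 1)) +
              bce(D(fake.detach()), torch.zeros(16, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Renderer step: fool the discriminator (reconstruction later inverts G).
    loss_g = bce(D(fake), torch.ones(16, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```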
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
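A sketch of the cost-volume step behind this kind of generalizable reconstruction: features from a few nearby views, warped onto the reference view's depth planes, are aggregated by their variance. Homography warping is omitted and all shapes are illustrative.

```python
import torch

def variance_cost_volume(feats):
    """Plane-sweep cost volume from a few nearby views (a sketch).

    feats: (V, D, C, H, W) view features already warped onto D depth
           planes of the reference view (warping step omitted here).
    Returns a (C, D, H, W) variance volume: low variance where the
    views agree, i.e. near the true surface depth.
    """
    return feats.var(dim=0).permute(1, 0, 2, 3)

# Toy usage with V=3 input views and D=8 depth planes:
vol = variance_cost_volume(torch.rand(3, 8, 16, 32, 32))
# A 3D CNN over this volume would then produce a neural encoding volume,
# from which an MLP decodes density and view-dependent color.
```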
arXiv Detail & Related papers (2021-03-29T13:15:23Z)