Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction
- URL: http://arxiv.org/abs/2409.06923v1
- Date: Wed, 11 Sep 2024 00:44:31 GMT
- Title: Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction
- Authors: Zijie Jiang, Tianhan Xu, Hiroharu Kato
- Abstract summary: Multi-view 3D surface reconstruction using neural implicit representations has made notable progress.
However, their effectiveness in reconstructing objects with specular or complex surfaces is typically biased by the directional parameterization used in their view-dependent radiance network.
We propose a novel hybrid directional parameterization to address their limitations in a unified form.
- Score: 7.970528879271508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view 3D surface reconstruction using neural implicit representations has made notable progress by modeling the geometry and view-dependent radiance fields within a unified framework. However, their effectiveness in reconstructing objects with specular or complex surfaces is typically biased by the directional parameterization used in their view-dependent radiance network. Viewing direction and reflection direction are the two most commonly used directional parameterizations, but each has its own limitations. Utilizing the viewing direction usually struggles to correctly decouple the geometry and appearance of objects with highly specular surfaces, while using the reflection direction tends to yield overly smooth reconstructions for concave or complex structures. In this paper, we analyze their failure cases in detail and propose a novel hybrid directional parameterization to address their limitations in a unified form. Extensive experiments demonstrate that the proposed hybrid directional parameterization consistently delivers satisfactory results in reconstructing objects with a wide variety of materials, geometry and appearance, whereas other directional parameterizations face challenges in reconstructing certain objects. Moreover, the proposed hybrid directional parameterization is nearly parameter-free and can be effortlessly applied in any existing neural surface reconstruction method.
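The two parameterizations the abstract contrasts can be made concrete in a few lines. Below is a minimal NumPy sketch: the reflection direction follows the standard formula r = 2(n·v)n − v (with v pointing from the surface toward the camera), and `hybrid_direction` blends the two with a hypothetical per-point weight — the weight and the blend itself are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflection of the viewing direction about the surface normal:
    r = 2 (n . v) n - v, with v pointing from the surface to the camera."""
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    return 2.0 * np.dot(n, v) * n - v

def hybrid_direction(view_dir, normal, weight):
    """Illustrative blend of the viewing and reflection directions.
    `weight` in [0, 1] is a hypothetical per-point coefficient
    (weight=0 recovers the viewing direction, weight=1 the reflection
    direction); the paper's actual hybrid scheme may differ."""
    v = view_dir / np.linalg.norm(view_dir)
    r = reflect(view_dir, normal)
    d = (1.0 - weight) * v + weight * r
    return d / np.linalg.norm(d)
```

At a head-on view (v parallel to n) the reflection direction coincides with the viewing direction, which is why the two parameterizations only diverge, and only matter for specular decoupling, at oblique angles.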
Related papers
- Normal-NeRF: Ambiguity-Robust Normal Estimation for Highly Reflective Scenes [11.515561389102174]
We introduce a transmittance-gradient-based normal estimation technique that remains robust even under ambiguous shape conditions.
Our proposed method achieves robust reconstruction and high-fidelity rendering of scenes featuring both highly specular reflections and intricate geometric structures.
arXiv Detail & Related papers (2025-01-16T10:42:29Z) - Shrinking: Reconstruction of Parameterized Surfaces from Signed Distance Fields [2.1638817206926855]
We propose a novel method for reconstructing explicit parameterized surfaces from Signed Distance Fields (SDFs)
Our approach iteratively contracts a parameterized initial sphere to conform to the target SDF shape, preserving differentiability and surface parameterization throughout.
This enables downstream applications such as texture mapping, geometry processing, animation, and finite element analysis.
arXiv Detail & Related papers (2024-10-04T03:39:15Z) - AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z) - NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z) - UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [87.191742674543]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces, leading to the best overall performance.
arXiv Detail & Related papers (2023-12-20T18:59:42Z) - RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
arXiv Detail & Related papers (2023-12-02T19:49:27Z) - MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops for larger, more complex scenes and sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z) - Multi-view 3D Reconstruction of a Texture-less Smooth Surface of Unknown Generic Reflectance [86.05191217004415]
Multi-view reconstruction of texture-less objects with unknown surface reflectance is a challenging task.
This paper proposes a simple and robust solution to this problem based on a co-light scanner.
arXiv Detail & Related papers (2021-05-25T01:28:54Z) - UCLID-Net: Single View Reconstruction in Object Space [60.046383053211215]
We show that building a geometry preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space.
We demonstrate, on both ShapeNet synthetic images, which are often used for benchmarking, and on real-world images, that our approach outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-06-06T09:15:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.