Learning Signed Distance Field for Multi-view Surface Reconstruction
- URL: http://arxiv.org/abs/2108.09964v1
- Date: Mon, 23 Aug 2021 06:23:50 GMT
- Title: Learning Signed Distance Field for Multi-view Surface Reconstruction
- Authors: Jingyang Zhang, Yao Yao, Long Quan
- Abstract summary: We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance, respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
- Score: 24.090786783370195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works on implicit neural representations have shown promising results
for multi-view surface reconstruction. However, most approaches are limited to
relatively simple geometries and usually require clean object masks for
reconstructing complex and concave objects. In this work, we introduce a novel
neural surface reconstruction framework that leverages the knowledge of stereo
matching and feature consistency to optimize the implicit surface
representation. More specifically, we apply a signed distance field (SDF) and a
surface light field to represent the scene geometry and appearance,
respectively. The SDF is directly supervised by geometry from stereo matching,
and is refined by optimizing the multi-view feature consistency and the
fidelity of rendered images. Our method is able to improve the robustness of
geometry estimation and support reconstruction of complex scene topologies.
Extensive experiments have been conducted on DTU, EPFL and Tanks and Temples
datasets. Compared to previous state-of-the-art methods, our method achieves
better mesh reconstruction in wide open scenes without masks as input.
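The key supervision idea in the abstract, anchoring the SDF to depths obtained from stereo matching, can be sketched in a few lines. This is an illustrative toy example rather than the paper's implementation: the analytic sphere SDF, the function names, and the L1 form of the loss are assumptions made for clarity.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

def depth_supervision_loss(sdf_fn, origins, dirs, mvs_depths):
    # Place a point on each ray at the depth predicted by multi-view
    # stereo (MVS) and penalize any non-zero SDF value there (L1).
    surface_pts = origins + mvs_depths[:, None] * dirs
    return np.abs(sdf_fn(surface_pts)).mean()

# Four rays from the origin toward a unit sphere; the true hit depth is 1.0.
origins = np.zeros((4, 3))
dirs = np.eye(3)[[0, 1, 2, 0]].astype(float)
loss_correct = depth_supervision_loss(sphere_sdf, origins, dirs, np.full(4, 1.0))
loss_wrong = depth_supervision_loss(sphere_sdf, origins, dirs, np.full(4, 1.3))
```

In the paper the SDF is a neural network, and this depth term is combined with multi-view feature consistency and rendered-image fidelity losses; a closed-form sphere stands in here so the behavior is easy to verify.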
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- Spurfies: Sparse Surface Reconstruction using Local Geometry Priors [8.260048622127913]
We introduce Spurfies, a novel method for sparse-view surface reconstruction.
It disentangles appearance and geometry information to utilize local geometry priors trained on synthetic data.
We validate our method on the DTU dataset and demonstrate that it outperforms previous state of the art by 35% in surface quality.
arXiv Detail & Related papers (2024-08-29T14:02:47Z)
- NeuSurf: On-Surface Priors for Neural Surface Reconstruction from Sparse Input Views [41.03837477483364]
We propose a novel sparse-view reconstruction framework that leverages on-surface priors to achieve highly faithful surface reconstruction.
Specifically, we design several constraints on global geometry alignment and local geometry refinement for jointly optimizing coarse shapes and fine details.
The experimental results with DTU and BlendedMVS datasets in two prevalent sparse settings demonstrate significant improvements over the state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T16:04:45Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- Dynamic Multi-View Scene Reconstruction Using Neural Implicit Surface [0.9134661726886928]
We propose a template-free method to reconstruct surface geometry and appearance using neural implicit representations from multi-view videos.
We leverage topology-aware deformation and the signed distance field to learn complex dynamic surfaces via differentiable volume rendering.
Experiments on different multi-view video datasets demonstrate that our method achieves high-fidelity surface reconstruction as well as photorealistic novel view synthesis.
arXiv Detail & Related papers (2023-02-28T19:47:30Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance degrades for larger, more complex scenes and sparse viewpoints.
This degradation is caused primarily by the inherent ambiguity of the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction [41.43563122590449]
We propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction.
Our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions.
arXiv Detail & Related papers (2022-05-31T14:52:07Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
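The bias-free volume rendering described in the NeuS entry above admits a compact sketch: SDF samples along a ray are mapped through the logistic CDF Phi_s(x) = sigmoid(s * x), and discrete opacities formed from consecutive CDF values make the rendering weights concentrate at the surface crossing. This is a simplified illustration under stated assumptions: the sharpness `s`, the toy SDF samples, and the stabilizing epsilon are choices made here, whereas the actual method learns `s` jointly with a neural SDF.

```python
import numpy as np

def neus_alphas(sdf_vals, s=64.0):
    # Logistic CDF of the SDF samples: Phi_s(x) = sigmoid(s * x).
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))
    # NeuS-style discrete opacity between consecutive samples, clipped
    # to [0, 1]; it is positive only where the SDF decreases along the ray.
    return np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)

def render_weights(alpha):
    # Standard alpha compositing: w_i = T_i * alpha_i with accumulated
    # transmittance T_i = prod_{j<i} (1 - alpha_j).
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    return trans * alpha

# A ray crossing a surface: the SDF falls from +0.5 to -0.5 and
# hits zero at sample index 32, so the weights should peak there.
sdf_vals = np.linspace(0.5, -0.5, 65)
alpha = neus_alphas(sdf_vals)
weights = render_weights(alpha)
```

Because the weights here derive from the discrete difference of the sigmoid of the SDF, their peak sits at the zero crossing itself, which is the first-order unbiasedness property the NeuS summary refers to.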
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.