NeuS: Learning Neural Implicit Surfaces by Volume Rendering for
Multi-view Reconstruction
- URL: http://arxiv.org/abs/2106.10689v1
- Date: Sun, 20 Jun 2021 12:59:42 GMT
- Title: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for
Multi-view Reconstruction
- Authors: Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura,
Wenping Wang
- Abstract summary: We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require a foreground mask as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
- Score: 88.02850205432763
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present a novel neural surface reconstruction method, called NeuS, for
reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require
a foreground mask as supervision, easily get trapped in local minima, and
therefore struggle with the reconstruction of objects with severe
self-occlusion or thin structures. Meanwhile, recent neural methods for novel
view synthesis, such as NeRF and its variants, use volume rendering to produce
a neural scene representation with robustness of optimization, even for highly
complex objects. However, extracting high-quality surfaces from this learned
implicit representation is difficult because there are not sufficient surface
constraints in the representation. In NeuS, we propose to represent a surface
as the zero-level set of a signed distance function (SDF) and develop a new
volume rendering method to train a neural SDF representation. We observe that
the conventional volume rendering method causes inherent geometric errors (i.e.
bias) for surface reconstruction, and therefore propose a new formulation that
is free of bias in the first order of approximation, thus leading to more
accurate surface reconstruction even without the mask supervision. Experiments
on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the
state of the art in high-quality surface reconstruction, especially for
objects and scenes with complex structures and self-occlusion.
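The unbiased weighting the abstract refers to can be sketched in a few lines: with SDF samples f(t_i) along a ray and a logistic sigmoid Phi_s of sharpness s, discrete opacities alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0) are alpha-composited as usual, so the rendering weight concentrates at the SDF zero crossing. The sharpness value and sample layout below are illustrative only, not the paper's training setup (in the paper, s is learned).

```python
import numpy as np

def neus_weights(sdf, s=64.0):
    """Discrete NeuS rendering weights for one ray.

    sdf: SDF values f(t_i) at n+1 sorted sample points -> n interval weights.
    s:   sigmoid sharpness (learned in the paper; fixed here for illustration).
    """
    phi = 1.0 / (1.0 + np.exp(-s * sdf))                    # logistic sigmoid Phi_s
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance T_i
    return trans * alpha                                    # w_i = T_i * alpha_i

# A ray crossing the surface once: SDF falls from +0.5 to -0.5,
# so the zero crossing sits at the middle of the 64 intervals.
sdf = np.linspace(0.5, -0.5, 65)
w = neus_weights(sdf)
```

Note that the product of (1 - alpha_i) telescopes to phi_n / phi_0, so the weights reduce to (phi_i - phi_{i+1}) / phi_0 and peak exactly where the sigmoid of the SDF is steepest, i.e. at the surface, which is the first-order-unbiased behaviour claimed above.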
Related papers
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views [20.840876921128956]
This paper pays special attention to consistent surface reconstruction from sparse views with noisy camera poses.
Unlike previous approaches, this paper exploits multi-view constraints directly from the explicit geometry of the neural surface.
We propose a joint learning strategy for the neural surface and camera poses, named SC-NeuS, to perform geometry-consistent surface reconstruction in an end-to-end manner.
arXiv Detail & Related papers (2023-07-12T03:45:45Z)
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections [72.45512144682554]
We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
arXiv Detail & Related papers (2023-04-18T02:34:58Z)
- Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization [6.493546601668505]
Methods for neural surface representation and rendering, for example NeuS, have made learning neural implicit surfaces through volume rendering increasingly popular.
Existing methods lack a direct representation of depth information, so object reconstruction is not constrained by geometric features: they represent implicit surfaces using only surface normals, without depth information.
We propose a neural implicit surface learning method called Depth-NeuS based on depth information optimization for multi-view reconstruction.
arXiv Detail & Related papers (2023-03-30T01:19:27Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces with an Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering-based neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but reconstructing larger and more complex scenes remains challenging.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
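UNISURF's unification can be illustrated by the occupancy-based compositing it uses in place of NeRF's volume density: the network's occupancy o(x_i) in [0, 1] acts directly as the per-sample alpha, so a hard 0/1 occupancy reduces the formula to surface rendering while soft values give volume rendering. The toy occupancies and colors below are invented for illustration.

```python
import numpy as np

def composite_occupancy(occ, colors):
    """Alpha-composite per-sample colors along one ray, using occupancy
    values directly as alphas: C = sum_i o_i * prod_{j<i} (1 - o_j) * c_i."""
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - occ)))[:-1]  # T_i
    weights = trans * occ                                        # w_i = T_i * o_i
    return weights @ colors

occ = np.array([0.0, 0.1, 0.9, 1.0])   # ray enters free space, then hits a surface
rgb = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
c = composite_occupancy(occ, rgb)      # -> [0.1, 0.81, 0.09]
```

With a hard step occupancy (all zeros, then ones) the full weight lands on the first occupied sample, recovering classical surface rendering; the soft values above blend nearby samples as in volume rendering.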
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all of the above) and is not responsible for any consequences of its use.