Recovering Fine Details for Neural Implicit Surface Reconstruction
- URL: http://arxiv.org/abs/2211.11320v1
- Date: Mon, 21 Nov 2022 10:06:09 GMT
- Title: Recovering Fine Details for Neural Implicit Surface Reconstruction
- Authors: Decai Chen, Peng Zhang, Ingo Feldmann, Oliver Schreer, Peter Eisert
- Abstract summary: We present D-NeuS, a volume-rendering-based neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
- Score: 3.9702081347126943
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent works on implicit neural representations have made significant
strides. Learning implicit neural surfaces using volume rendering has gained
popularity in multi-view reconstruction without 3D supervision. However,
accurately recovering fine details is still challenging, due to the underlying
ambiguity of geometry and appearance representation. In this paper, we present
D-NeuS, a volume-rendering-based neural implicit surface reconstruction method
capable of recovering fine geometry details, which extends NeuS with two additional
loss functions targeting enhanced reconstruction quality. First, we encourage
the rendered surface points from alpha compositing to have zero signed distance
values, alleviating the geometry bias arising from transforming SDF to density
for volume rendering. Second, we impose multi-view feature consistency on the
surface points, derived by interpolating SDF zero-crossings from sampled points
along rays. Extensive quantitative and qualitative results demonstrate that our
method reconstructs high-accuracy surfaces with details, and outperforms the
state of the art.
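The two losses described in the abstract can be pictured with a minimal NumPy sketch. This is illustrative only, not the authors' code: D-NeuS operates on a learned SDF network and multi-view image features, whereas the toy functions below take plain arrays. The first function locates a surface point by linearly interpolating the SDF zero-crossing between adjacent ray samples (the point on which feature consistency is imposed); the second penalizes a non-zero SDF value at the alpha-composited depth, the geometric bias the paper targets.

```python
import numpy as np

def zero_crossing_depth(t, sdf):
    """Linearly interpolate the first outside-to-inside SDF zero-crossing
    along a ray.

    t:   (N,) sample depths, strictly increasing
    sdf: (N,) signed distances at those samples
    Returns the interpolated depth t* with sdf(t*) ~ 0, or None if the
    ray never crosses the surface.
    """
    for i in range(len(t) - 1):
        if sdf[i] > 0 and sdf[i + 1] < 0:  # sign change: surface in between
            w = sdf[i] / (sdf[i] - sdf[i + 1])
            return t[i] + w * (t[i + 1] - t[i])
    return None

def geometric_bias_loss(weights, t, sdf_at):
    """|SDF| evaluated at the alpha-composited (rendered) depth.

    If rendering and geometry agree, the composited depth lies on the
    surface and this loss is zero.
    """
    t_hat = np.sum(weights * t) / np.sum(weights)  # rendered depth
    return abs(sdf_at(t_hat))
```

For a linear SDF with its zero at depth 1.5, interpolating samples at depths 1 and 2 recovers exactly 1.5, and uniform weights centered on that depth give zero bias loss.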
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
- Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization [6.493546601668505]
Methods for neural surface representation and rendering, for example NeuS, have shown that learning neural implicit surfaces through volume rendering is becoming increasingly popular.
Existing methods lack a direct representation of depth information, which leaves object reconstruction insufficiently constrained by geometric features.
This is because existing methods use only surface normals to represent implicit surfaces, without exploiting depth information.
We propose a neural implicit surface learning method called Depth-NeuS based on depth information optimization for multi-view reconstruction.
arXiv Detail & Related papers (2023-03-30T01:19:27Z)
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Improved surface reconstruction using high-frequency details [44.73668037810989]
We propose a novel method to improve the quality of surface reconstruction in neural rendering.
Our results show that our method can reconstruct high-frequency surface details and obtain better surface reconstruction quality than the current state of the art.
arXiv Detail & Related papers (2022-06-15T23:46:48Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
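The first-order-unbiased weighting NeuS introduces can be sketched in NumPy. The discrete opacity formula (alpha_i derived from the sigmoid Phi_s of consecutive SDF samples) follows the NeuS paper; the inverse-scale value `s` and the toy inputs are illustrative assumptions, and a real implementation would evaluate a learned SDF network along each ray.

```python
import numpy as np

def sigmoid(x, s):
    """Phi_s: sigmoid with learnable inverse standard deviation s."""
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_alpha(sdf, s=64.0):
    """Discrete opacity from SDF samples along a ray.

    alpha_i = max((Phi_s(f_i) - Phi_s(f_{i+1})) / Phi_s(f_i), 0),
    which is non-zero only where the SDF decreases, i.e. where the ray
    approaches the surface.
    """
    phi = sigmoid(np.asarray(sdf, dtype=float), s)
    return np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-10), 0.0, 1.0)

def render_weights(alpha):
    """Volume-rendering weights w_i = T_i * alpha_i, where T_i is the
    accumulated transmittance up to sample i."""
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return T * alpha
```

For SDF samples decreasing linearly through zero, the resulting weights concentrate at the samples bracketing the zero-crossing and sum to (almost) one, which is the bias-free behavior the formulation is designed to achieve.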
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.