SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy
Views
- URL: http://arxiv.org/abs/2307.05892v1
- Date: Wed, 12 Jul 2023 03:45:45 GMT
- Title: SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy
Views
- Authors: Shi-Sheng Huang, Zi-Xin Zou, Yi-Chi Zhang, Hua Huang
- Abstract summary: This paper pays special attention to consistent surface reconstruction from sparse views with noisy camera poses.
Unlike previous approaches, the key idea of this paper is to exploit multi-view constraints directly from the explicit geometry of the neural surface.
We propose a jointly learning strategy for neural surface and camera poses, named SC-NeuS, to perform geometry-consistent surface reconstruction in an end-to-end manner.
- Score: 20.840876921128956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent neural surface reconstruction approaches based on volume
rendering have achieved impressive surface reconstruction quality, but they are
still limited to dense, accurately posed input views. To overcome these
drawbacks, this paper pays special attention to consistent surface
reconstruction from sparse views with noisy camera poses. Unlike previous
approaches, the key difference of this paper is that it exploits the multi-view
constraints directly from the explicit geometry of the neural surface, which
can be used as effective regularization to jointly learn the neural surface and
refine the camera poses. To build effective multi-view constraints, we
introduce a fast differentiable on-surface intersection to generate on-surface
points, and propose view-consistent losses based on such differentiable points
to regularize the neural surface learning. Building on these constraints, we propose a
jointly learning strategy for neural surface and camera poses, named SC-NeuS,
to perform geometry-consistent surface reconstruction in an end-to-end manner.
Through extensive evaluation on public datasets, our SC-NeuS achieves
consistently better surface reconstruction results with fine-grained details
than previous state-of-the-art neural surface reconstruction approaches,
especially from sparse and noisy camera views.
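The "fast differentiable on-surface intersection" mentioned above can be sketched roughly as follows. This is an illustrative reconstruction under our own assumptions, not the authors' implementation: it sphere-traces a signed distance field to locate the ray-surface hit, then re-attaches gradients with a first-order implicit-function-theorem correction, so the resulting on-surface points can carry gradients into view-consistent losses. The names `sdf`, `sphere_trace`, and `differentiable_intersection` are hypothetical.

```python
import torch

def sphere_trace(sdf, origin, direction, n_steps=64):
    """March along the ray o + t*d until the SDF is ~0 (non-differentiable search)."""
    t = torch.zeros(origin.shape[0], device=origin.device)
    with torch.no_grad():
        for _ in range(n_steps):
            t = t + sdf(origin + t[:, None] * direction)
    return t

def differentiable_intersection(sdf, origin, direction):
    """Re-attach gradients via the implicit function theorem:
    x* = o + t*d, with dt = -f(x*) / <grad f(x*), d>."""
    t0 = sphere_trace(sdf, origin, direction)
    x0 = (origin + t0[:, None] * direction).requires_grad_(True)
    f = sdf(x0)
    (grad_f,) = torch.autograd.grad(f.sum(), x0, create_graph=True)
    denom = (grad_f * direction).sum(-1)
    t = t0 - f / (denom + 1e-8)              # differentiable hit depth
    return origin + t[:, None] * direction   # differentiable on-surface point

# Toy SDF: unit sphere centered at the origin.
sdf = lambda x: x.norm(dim=-1) - 1.0
o = torch.tensor([[0.0, 0.0, -3.0]])
d = torch.tensor([[0.0, 0.0, 1.0]])
x = differentiable_intersection(sdf, o, d)
```

Points produced this way could then be projected into neighboring views to build reprojection-style consistency losses, with gradients flowing back into both the surface network and the camera parameters.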
Related papers
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- Improving Neural Surface Reconstruction with Feature Priors from Multi-View Image [87.00660347447494]
Recent advancements in Neural Surface Reconstruction (NSR) have significantly improved multi-view reconstruction when coupled with volume rendering.
We propose an investigation into feature-level consistent loss, aiming to harness valuable feature priors from diverse pretext visual tasks.
Our results, analyzed on DTU and EPFL, reveal that feature priors from image matching and multi-view stereo datasets outperform other pretext tasks.
arXiv Detail & Related papers (2024-08-04T16:09:46Z)
- PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction [31.768161784030923]
The PSDF framework is proposed, which draws on external geometric priors from a pretrained MVS network and on internal geometric priors inherent in the NISR model.
Experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
arXiv Detail & Related papers (2024-01-23T13:30:43Z)
- NeuSurf: On-Surface Priors for Neural Surface Reconstruction from Sparse Input Views [41.03837477483364]
We propose a novel sparse view reconstruction framework that leverages on-surface priors to achieve highly faithful surface reconstruction.
Specifically, we design several constraints on global geometry alignment and local geometry refinement for jointly optimizing coarse shapes and fine details.
The experimental results with DTU and BlendedMVS datasets in two prevalent sparse settings demonstrate significant improvements over the state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T16:04:45Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, neural Poisson surface reconstruction not only overcomes limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z)
- Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization [6.493546601668505]
Neural surface representation and rendering methods such as NeuS have shown that learning neural implicit surfaces through volume rendering is becoming increasingly popular.
Existing methods lack a direct representation of depth information, which leaves object reconstruction unconstrained by geometric features.
This is because existing methods use only surface normals to represent implicit surfaces, without exploiting depth information.
We propose a neural implicit surface learning method called Depth-NeuS based on depth information optimization for multi-view reconstruction.
arXiv Detail & Related papers (2023-03-30T01:19:27Z)
- SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse views [40.7986573030214]
We introduce SparseNeuS, a novel neural rendering based method for the task of surface reconstruction from multi-view images.
SparseNeuS can generalize to new scenes and works well with sparse images (as few as 2 or 3).
arXiv Detail & Related papers (2022-06-12T13:34:03Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance degrades on more complex scenes and sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
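NeuS's bias-free formulation can be sketched roughly as follows. This is a hedged reconstruction of the idea, not the paper's exact code: discrete opacities are derived from the logistic CDF of the SDF along the ray, so the rendering weight peaks at the SDF zero-crossing rather than in front of it. Variable names are ours; `inv_s` plays the role of the learnable sharpness parameter.

```python
import torch

def neus_weights(sdf_vals, inv_s=64.0):
    """sdf_vals: (n_samples,) SDF at ordered samples along one ray.
    Returns per-interval rendering weights (length n_samples - 1)."""
    cdf = torch.sigmoid(inv_s * sdf_vals)                       # Phi_s(f(x_i))
    # Discrete opacity per interval, clamped to be non-negative.
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-8)).clamp(min=0.0)
    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j).
    trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1.0 - alpha]), 0)[:-1]
    return trans * alpha                                        # w_i = T_i * alpha_i

# SDF along a ray that crosses the surface at the midpoint (sample index 8):
# positive (outside) before the hit, negative (inside) after it.
t = torch.linspace(-1.0, 1.0, 17)
w = neus_weights(-t)
```

With this construction the weights sum to (at most) 1 and concentrate around the zero-crossing, which is the first-order unbiasedness property the NeuS summary above refers to.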
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.