SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views
- URL: http://arxiv.org/abs/2206.05737v1
- Date: Sun, 12 Jun 2022 13:34:03 GMT
- Title: SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views
- Authors: Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, Wenping Wang
- Abstract summary: We introduce SparseNeuS, a novel neural rendering-based method for the task of surface reconstruction from multi-view images.
SparseNeuS can generalize to new scenes and work well with sparse images (as few as 2 or 3).
- Score: 40.7986573030214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce SparseNeuS, a novel neural rendering-based method for the task
of surface reconstruction from multi-view images. This task becomes more
difficult when only sparse images are provided as input, a scenario where
existing neural reconstruction approaches usually produce incomplete or
distorted results. Moreover, their inability to generalize to unseen new
scenes impedes their application in practice. In contrast, SparseNeuS can
generalize to new scenes and work well with sparse images (as few as 2 or 3).
SparseNeuS adopts the signed distance function (SDF) as the surface representation,
and learns generalizable priors from image features by introducing geometry
encoding volumes for generic surface prediction. Moreover, several strategies
are introduced to effectively leverage sparse views for high-quality
reconstruction, including 1) a multi-level geometry reasoning framework to
recover the surfaces in a coarse-to-fine manner; 2) a multi-scale color
blending scheme for more reliable color prediction; 3) a consistency-aware
fine-tuning scheme to control the inconsistent regions caused by occlusion and
noise. Extensive experiments demonstrate that our approach not only outperforms
the state-of-the-art methods, but also exhibits good efficiency,
generalizability, and flexibility.
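The abstract's central component, a geometry encoding volume queried for SDF values, can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration written from the description above: per-view 2D image features are back-projected into a voxel grid, aggregated across views, and interpolated by a small MLP that predicts SDF values. The function names, the mean/variance aggregation, and the network sizes are assumptions on my part, not the authors' released implementation, and the actual method applies this at multiple resolutions in a coarse-to-fine manner.

```python
# Minimal sketch (my assumptions, not the official SparseNeuS code): back-project
# per-view 2D image features into a voxel grid to form a geometry encoding
# volume, then query it with a small MLP to predict SDF values at 3D points.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_geometry_volume(feats, K, w2c, bbox_min, bbox_max, res=32):
    """feats: (V, C, H, W) feature maps; K: (V, 3, 3) intrinsics;
    w2c: (V, 4, 4) world-to-camera; bbox_min/bbox_max: (3,) tensors."""
    V, C, H, W = feats.shape
    axes = [torch.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)   # (res,res,res,3)
    pts = grid.reshape(-1, 3)                                          # voxel centers
    pts_h = torch.cat([pts, torch.ones(len(pts), 1)], dim=-1)          # homogeneous

    per_view = []
    for v in range(V):
        cam = (w2c[v] @ pts_h.T).T[:, :3]                  # world -> camera frame
        uvz = (K[v] @ cam.T).T                             # perspective projection
        uv = uvz[:, :2] / uvz[:, 2:3].clamp(min=1e-6)      # pixel coordinates
        uv_n = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], -1) * 2 - 1
        sampled = F.grid_sample(feats[v:v + 1], uv_n.view(1, -1, 1, 2),
                                align_corners=True)        # (1, C, N, 1)
        per_view.append(sampled.squeeze(0).squeeze(-1).T)  # (N, C)

    stack = torch.stack(per_view)                          # (V, N, C)
    # Mean + variance across views: the variance encodes multi-view consistency.
    vol = torch.cat([stack.mean(0), stack.var(0, unbiased=False)], dim=-1)
    return vol.reshape(res, res, res, 2 * C)


class SDFHead(nn.Module):
    """Tiny MLP mapping interpolated volume features (plus position) to an SDF value."""
    def __init__(self, feat_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, volume, query_pts, bbox_min, bbox_max):
        # Trilinear interpolation of the geometry encoding volume at query points.
        norm = (query_pts - bbox_min) / (bbox_max - bbox_min) * 2 - 1
        vol = volume.permute(3, 0, 1, 2).unsqueeze(0)       # (1, F, X, Y, Z)
        grid = norm.view(1, -1, 1, 1, 3)[..., [2, 1, 0]]    # grid_sample indexes z, y, x
        feat = F.grid_sample(vol, grid, align_corners=True).view(vol.shape[1], -1).T
        return self.mlp(torch.cat([feat, query_pts], dim=-1))  # (N, 1) SDF
```

In the full method, the coarse volume yields an initial surface that guides a finer volume, colors are predicted by blending the input-view colors at multiple scales rather than by a per-scene color network, and a consistency-aware fine-tuning stage down-weights regions affected by occlusion and noise.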
Related papers
- PVP-Recon: Progressive View Planning via Warping Consistency for Sparse-View Surface Reconstruction [49.7580491592023]
We propose PVP-Recon, a novel and effective sparse-view surface reconstruction method.
PVP-Recon starts with an initial surface reconstruction from as few as 3 views and progressively adds new ones.
This progressive view planning process is interleaved with a neural SDF-based reconstruction module.
arXiv Detail & Related papers (2024-09-09T10:06:34Z)
- Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation [7.203073346844801]
This paper introduces a novel paradigm for generalizable neural radiance fields (NeRF).
We propose the first paradigm that constructs a generalizable neural field based on point-based rather than image-based rendering.
Our approach explicitly models visibility with geometric priors and augments these priors with neural features.
arXiv Detail & Related papers (2024-01-25T17:58:51Z)
- NeuSurf: On-Surface Priors for Neural Surface Reconstruction from Sparse Input Views [41.03837477483364]
We propose a novel sparse view reconstruction framework that leverages on-surface priors to achieve highly faithful surface reconstruction.
Specifically, we design several constraints on global geometry alignment and local geometry refinement for jointly optimizing coarse shapes and fine details.
Experimental results on the DTU and BlendedMVS datasets under two prevalent sparse settings demonstrate significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T16:04:45Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of such prior-based approaches.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views [20.840876921128956]
This paper pays special attention to consistent surface reconstruction from sparse views with noisy camera poses.
Unlike previous approaches, it exploits multi-view constraints directly from the explicit geometry of the neural surface.
We propose a joint learning strategy for the neural surface and camera poses, named SC-NeuS, to perform geometry-consistent surface reconstruction in an end-to-end manner; a generic sketch of such joint optimization follows this entry.
arXiv Detail & Related papers (2023-07-12T03:45:45Z)
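As a rough illustration of the joint learning this summary describes, the sketch below optimizes per-view pose corrections together with an SDF network. It is a generic sketch under my own assumptions (the surface loss on matched points is a placeholder), not SC-NeuS's actual multi-view constraints.

```python
# Generic sketch of jointly optimizing camera poses and a neural SDF (my
# assumptions; not SC-NeuS's exact formulation): per-view SE(3) corrections
# are learned together with the surface network.
import torch
import torch.nn as nn


def axis_angle_to_matrix(r):
    """Rodrigues' formula: (3,) axis-angle vector -> (3, 3) rotation matrix."""
    theta = r.norm().clamp(min=1e-8)
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)


class PoseCorrection(nn.Module):
    """Learnable SE(3) residuals applied on top of noisy initial poses."""
    def __init__(self, num_views):
        super().__init__()
        # Tiny random init avoids the degenerate gradient of the norm at zero.
        self.dr = nn.Parameter(1e-4 * torch.randn(num_views, 3))  # axis-angle
        self.dt = nn.Parameter(torch.zeros(num_views, 3))         # translation

    def forward(self, v, R_init, t_init):
        return axis_angle_to_matrix(self.dr[v]) @ R_init, t_init + self.dt[v]


sdf_net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
poses = PoseCorrection(num_views=3)
optim = torch.optim.Adam([{"params": sdf_net.parameters(), "lr": 1e-3},
                          {"params": poses.parameters(), "lr": 1e-4}])

# One illustrative step with hypothetical data: points believed to lie on the
# surface (e.g. from cross-view matches) are pulled onto the SDF zero level set,
# back-propagating into both the surface network and the pose corrections.
R0, t0 = torch.eye(3), torch.zeros(3)      # noisy initial pose of view 0
pts_cam = torch.rand(1024, 3)              # hypothetical points in the view-0 frame
R, t = poses(0, R0, t0)
loss = sdf_net(pts_cam @ R.T + t).abs().mean()
loss.backward()
optim.step()
```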
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their quality degrades for larger or more sparsely observed scenes.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction; a hedged sketch of such prior-based losses follows this entry.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
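The recipe behind such monocular cues can be sketched as extra loss terms on rendered depth and normals. The snippet below is a hedged illustration with assumed loss weights and tensor shapes, not the paper's exact implementation; the scale-shift alignment reflects that monocular depth predictions are only defined up to an affine transform.

```python
# Hedged sketch of monocular-prior losses (assumed weights and shapes; not the
# official MonoSDF code): supervise rendered depth and normals with predictions
# from an off-the-shelf monocular network, alongside the usual RGB loss.
import torch
import torch.nn.functional as F


def depth_prior_loss(depth_render, depth_mono):
    """Monocular depth is only defined up to an affine ambiguity, so fit a
    per-batch (scale, shift) by least squares before comparing."""
    d = depth_mono.flatten()
    A = torch.stack([d, torch.ones_like(d)], dim=-1)         # (N, 2)
    b = depth_render.detach().flatten().unsqueeze(-1)         # (N, 1), detached
    sol = torch.linalg.lstsq(A, b).solution                   # [scale, shift]
    aligned = (A @ sol).squeeze(-1)                           # aligned prior, no grad
    return F.mse_loss(depth_render.flatten(), aligned)


def normal_prior_loss(n_render, n_mono):
    """L1 difference plus an angular (1 - cosine) term between unit normals."""
    n_r = F.normalize(n_render, dim=-1)
    n_m = F.normalize(n_mono, dim=-1)
    return (n_r - n_m).abs().sum(-1).mean() + (1.0 - (n_r * n_m).sum(-1)).mean()


def total_loss(rgb_render, rgb_gt, depth_render, depth_mono, n_render, n_mono,
               w_depth=0.1, w_normal=0.05):
    """Full objective: RGB reconstruction plus weighted geometric priors."""
    return (F.l1_loss(rgb_render, rgb_gt)
            + w_depth * depth_prior_loss(depth_render, depth_mono)
            + w_normal * normal_prior_loss(n_render, n_mono))
```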
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is unbiased in the first order of approximation, leading to more accurate surface reconstruction even without mask supervision; a minimal sketch of this SDF-to-opacity conversion follows this entry.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
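The formulation referenced above converts SDF values sampled along a ray into opacities through a logistic CDF and then composites them like standard alpha blending. The snippet below is a minimal sketch of that conversion following the discrete opacity formula in the NeuS paper; the fixed sharpness value and the toy ray are my simplifications (in the actual method the sharpness is learned and sampling is hierarchical).

```python
# Minimal sketch of NeuS-style volume rendering weights: convert SDF samples
# along each ray into opacities via a logistic CDF, then accumulate
# transmittance as in standard alpha compositing.
import torch


def neus_weights(sdf, s=64.0):
    """sdf: (n_rays, n_samples) signed distances at ordered points along each ray.
    s: sharpness of the logistic CDF (learnable in the actual method)."""
    cdf = torch.sigmoid(s * sdf)                                   # Phi_s(f(p_i))
    # Opacity between consecutive samples, clamped to be non-negative
    # (unbiased in the first order of approximation, per the NeuS formulation).
    alpha = ((cdf[:, :-1] - cdf[:, 1:]) / (cdf[:, :-1] + 1e-7)).clamp(min=0.0)
    # Transmittance: probability the ray reaches sample i without being absorbed.
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-7], dim=-1), dim=-1)[:, :-1]
    return alpha * trans                                           # (n_rays, n_samples-1)


# Toy usage: weights should peak near the zero crossing of the SDF.
t = torch.linspace(0.0, 2.0, 128).expand(4, -1)
sdf = 1.0 - t                                                      # surface at t = 1
w = neus_weights(sdf)
print(w.argmax(dim=-1))                                            # index near t ~= 1
```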