PR-NeuS: A Prior-based Residual Learning Paradigm for Fast Multi-view
Neural Surface Reconstruction
- URL: http://arxiv.org/abs/2312.11577v1
- Date: Mon, 18 Dec 2023 09:24:44 GMT
- Authors: Jianyao Xu, Qingshan Xu, Xinyao Liao, Wanjuan Su, Chen Zhang, Yew-Soon
Ong, Wenbing Tao
- Abstract summary: We propose a prior-based residual learning paradigm for fast multi-view neural surface reconstruction.
Our method only takes about 3 minutes to reconstruct the surface of a single scene, while achieving competitive surface quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural surface learning has shown impressive performance in multi-view
surface reconstruction. However, most existing methods use large multilayer
perceptrons (MLPs) trained from scratch, resulting in hours of training for a
single scene. Recently, how to accelerate neural surface learning has received
considerable attention and remains an open problem. In this
work, we propose a prior-based residual learning paradigm for fast multi-view
neural surface reconstruction. This paradigm consists of two optimization
stages. In the first stage, we propose to leverage generalization models to
generate a basis signed distance function (SDF) field. This initial field can
be quickly obtained by fusing multiple local SDF fields produced by
generalization models. This provides a coarse global geometry prior. Based on
this prior, in the second stage, a fast residual learning strategy based on
hash-encoding networks is proposed to encode an offset SDF field for the basis
SDF field. Moreover, we introduce a prior-guided sampling scheme to help the
residual learning stage converge better, and thus recover finer structures.
With our designed paradigm, experimental results show that our method only
takes about 3 minutes to reconstruct the surface of a single scene, while
achieving competitive surface quality. Our code will be released upon
publication.
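The core idea above can be sketched in code: a frozen basis SDF from stage one, a learned residual offset from stage two, and sampling concentrated where the prior says the surface is. The sketch below is illustrative only; all names are hypothetical, the basis field is a stand-in sphere rather than a fused prior, and the residual is a fixed function rather than a hash-encoding network.

```python
import numpy as np

def basis_sdf(x):
    """Stand-in for the stage-1 fused prior: SDF of a unit sphere."""
    return np.linalg.norm(x, axis=-1) - 1.0

def offset_sdf(x):
    """Stand-in for the stage-2 hash-encoded residual (a small fixed bump)."""
    return 0.05 * np.sin(4.0 * x[..., 0])

def full_sdf(x):
    """Final field = frozen basis prior + learned residual offset."""
    return basis_sdf(x) + offset_sdf(x)

def prior_guided_samples(origins, dirs, n=32, band=0.1):
    """Concentrate ray samples where the basis SDF is near zero."""
    t = np.linspace(0.0, 4.0, 256)                       # coarse ray steps
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]
    near = np.abs(basis_sdf(pts)) < band                 # near-surface mask
    w = near.astype(float) + 1e-3                        # weight coarse steps
    w /= w.sum(axis=1, keepdims=True)
    idx = np.array([np.random.choice(len(t), size=n, p=wi) for wi in w])
    return t[idx]
```

Because the basis field is frozen, only the (small) residual must be optimized per scene, which is what makes the short training time plausible.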
Related papers
- $R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement [5.810659946867557]
Mesh reconstruction based on Neural Radiance Fields (NeRF) is popular in a variety of applications such as computer graphics, virtual reality, and medical imaging.
We propose a novel algorithm that progressively generates and optimizes meshes from multi-view images.
Our method delivers highly competitive and robust performance in both mesh rendering quality and geometric quality.
arXiv Detail & Related papers (2024-08-19T16:33:17Z)
- Fine Structure-Aware Sampling: A New Sampling Training Scheme for Pixel-Aligned Implicit Models in Single-View Human Reconstruction [105.46091601932524]
We introduce Fine Structure-Aware Sampling (FSS) to train pixel-aligned implicit models for single-view human reconstruction.
FSS proactively adapts to the thickness and complexity of surfaces.
It also introduces a mesh thickness loss signal for pixel-aligned implicit models.
arXiv Detail & Related papers (2024-02-29T14:26:46Z)
- FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural Rendering [8.264851594332677]
We propose an effective mesh-based neural rendering approach, named FastMESH, which samples only at the intersections of rays and the mesh.
Experiments demonstrate that our approach achieves state-of-the-art results on both reconstruction and novel view synthesis.
arXiv Detail & Related papers (2023-05-29T02:43:14Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction [142.61256012419562]
We present Voxurf, a voxel-based surface reconstruction approach that is both efficient and accurate.
Voxurf addresses the aforementioned issues via several key designs, including 1) a two-stage training procedure that attains a coherent coarse shape and recovers fine details successively, 2) a dual color network that maintains color-geometry dependency, and 3) a hierarchical geometry feature to encourage information propagation across voxels.
arXiv Detail & Related papers (2022-08-26T14:48:02Z)
- Critical Regularizations for Neural Surface Reconstruction in the Wild [26.460011241432092]
We present RegSDF, which shows that proper point cloud supervisions and geometry regularizations are sufficient to produce high-quality and robust reconstruction results.
RegSDF is able to reconstruct surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.
arXiv Detail & Related papers (2022-06-07T08:11:22Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but reconstruction quality degrades in more complex settings.
This degradation is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
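Several entries above (notably the global-sparse local-dense grids paper) store SDF values in sparse voxel-block structures that allocate dense storage only near surfaces. A minimal illustrative sketch of such a layout, with hypothetical names and not taken from any of the papers, might look like:

```python
import numpy as np

BLOCK = 8  # dense voxels per block edge

class SparseDenseSDF:
    """Globally sparse, locally dense SDF grid: a hash map from voxel-block
    coordinates to small dense arrays, allocated lazily near surfaces."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.blocks = {}  # (bx, by, bz) -> (BLOCK, BLOCK, BLOCK) SDF array

    def _split(self, p):
        """Map a world point to (block key, in-block voxel index)."""
        v = np.floor(np.asarray(p) / self.voxel_size).astype(int)
        return tuple(v // BLOCK), tuple(v % BLOCK)

    def set_sdf(self, p, d):
        bk, vk = self._split(p)
        if bk not in self.blocks:                  # allocate only when touched
            self.blocks[bk] = np.full((BLOCK,) * 3, np.inf)
        self.blocks[bk][vk] = d

    def get_sdf(self, p):
        bk, vk = self._split(p)
        blk = self.blocks.get(bk)
        return np.inf if blk is None else blk[vk]
```

Queries inside an allocated block hit a contiguous dense array (cache-friendly), while empty space costs nothing beyond the hash map, which is the sparsity the papers exploit.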
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.