Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction
- URL: http://arxiv.org/abs/2208.12697v5
- Date: Sun, 13 Aug 2023 15:52:52 GMT
- Title: Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction
- Authors: Tong Wu, Jiaqi Wang, Xingang Pan, Xudong Xu, Christian Theobalt, Ziwei
Liu, Dahua Lin
- Abstract summary: We present Voxurf, a voxel-based surface reconstruction approach that is both efficient and accurate.
Voxurf addresses the aforementioned issues via several key designs, including 1) a two-stage training procedure that attains a coherent coarse shape and recovers fine details successively, 2) a dual color network that maintains color-geometry dependency, and 3) a hierarchical geometry feature to encourage information propagation across voxels.
- Score: 142.61256012419562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural surface reconstruction aims to reconstruct accurate 3D surfaces based
on multi-view images. Previous methods based on neural volume rendering mostly
train a fully implicit model with MLPs, which typically require hours of
training for a single scene. Recent efforts explore the explicit volumetric
representation to accelerate the optimization via memorizing significant
information with learnable voxel grids. However, existing voxel-based methods
often struggle in reconstructing fine-grained geometry, even when combined with
an SDF-based volume rendering scheme. We reveal that this is because 1) the
voxel grids tend to break the color-geometry dependency that facilitates
fine-geometry learning, and 2) the under-constrained voxel grids lack spatial
coherence and are vulnerable to local minima. In this work, we present Voxurf,
a voxel-based surface reconstruction approach that is both efficient and
accurate. Voxurf addresses the aforementioned issues via several key designs,
including 1) a two-stage training procedure that attains a coherent coarse
shape and recovers fine details successively, 2) a dual color network that
maintains color-geometry dependency, and 3) a hierarchical geometry feature to
encourage information propagation across voxels. Extensive experiments show
that Voxurf achieves high efficiency and high quality at the same time. On the
DTU benchmark, Voxurf achieves higher reconstruction quality with a 20x
training speedup compared to previous fully implicit methods. Our code is
available at https://github.com/wutong16/Voxurf.
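The explicit volumetric representation discussed above stores SDF samples on a learnable voxel grid and reads them back by trilinear interpolation. A minimal NumPy sketch of such a query (illustrative only; `query_sdf` and the grid layout are assumptions, not Voxurf's actual implementation):

```python
import numpy as np

def query_sdf(grid, point, voxel_size=1.0):
    """Trilinearly interpolate an SDF value from a dense voxel grid.

    grid: (X, Y, Z) array of SDF samples at voxel corners.
    point: continuous 3D query position in world units.
    """
    p = np.asarray(point, dtype=float) / voxel_size
    i0 = np.floor(p).astype(int)                   # lower corner index
    i0 = np.clip(i0, 0, np.array(grid.shape) - 2)  # stay inside the grid
    t = p - i0                                     # fractional offsets in [0, 1]

    # Gather the 8 corner values and blend along each axis in turn.
    c = grid[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    cx = c[0] * (1 - t[0]) + c[1] * t[0]    # blend along x -> (2, 2)
    cy = cx[0] * (1 - t[1]) + cx[1] * t[1]  # blend along y -> (2,)
    return cy[0] * (1 - t[2]) + cy[1] * t[2]  # blend along z -> scalar
```

Because the interpolated value is differentiable in both the query point and the grid values, the grid entries can be optimized directly by gradient descent, which is what makes such representations fast to train.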
Related papers
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- VoxNeuS: Enhancing Voxel-Based Neural Surface Reconstruction via Gradient Interpolation [10.458776364195796]
We propose VoxNeuS, a lightweight voxel-based method for computation- and memory-efficient neural surface reconstruction.
The entire training process takes 15 minutes and less than 3 GB of memory on a single RTX 2080 Ti GPU.
arXiv Detail & Related papers (2024-06-11T11:26:27Z)
- HVOFusion: Incremental Mesh Reconstruction Using Hybrid Voxel Octree [12.180935725861723]
We propose a novel hybrid voxel-octree approach to fuse octree with voxel structures.
Such sparse structure preserves triangular faces in the leaf nodes and produces partial meshes sequentially for incremental reconstruction.
Experimental results on several datasets show that our proposed approach is capable of quickly and accurately reconstructing a scene with realistic colors.
arXiv Detail & Related papers (2024-04-27T18:24:53Z)
- PR-NeuS: A Prior-based Residual Learning Paradigm for Fast Multi-view Neural Surface Reconstruction [45.34454245176438]
We propose a prior-based residual learning paradigm for fast multi-view neural surface reconstruction.
Our method only takes about 3 minutes to reconstruct the surface of a single scene, while achieving competitive surface quality.
arXiv Detail & Related papers (2023-12-18T09:24:44Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
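Multi-resolution hash grids of this kind index voxel corners with a spatial hash instead of a dense array, so feature tables stay small. A minimal sketch of the commonly used hash (following the well-known Instant-NGP scheme; `hash_index` is an illustrative name, not VoxNeRF's API):

```python
# Large primes from Instant-NGP's spatial hash; the first coordinate
# is left unmultiplied (equivalent to a prime of 1).
PRIMES = (1, 2654435761, 805459861)

def hash_index(ix, iy, iz, table_size):
    """Map an integer voxel-corner coordinate to a slot in a fixed-size
    feature table by XOR-ing prime-scaled coordinates."""
    return ((ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])) % table_size
```

At each resolution level the eight hashed corner features around a query point are trilinearly blended, and the per-level results are concatenated before being fed to a small decoder network.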
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
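A globally sparse, locally dense grid can be sketched as a map from block indices to small dense arrays, so storage is only allocated near observed surfaces. The class below is an illustrative toy under that assumption, not the paper's actual data structure:

```python
import numpy as np

BLOCK = 8  # voxels per block edge; each block stores 8^3 SDF values (illustrative)

class SparseBlockGrid:
    """Globally sparse, locally dense SDF storage: dense blocks are
    allocated lazily, and unallocated space reads back as 'far' (inf)."""

    def __init__(self):
        self.blocks = {}  # block index (tuple) -> dense (8, 8, 8) SDF array

    def _split(self, ijk):
        ijk = np.asarray(ijk)
        return tuple(ijk // BLOCK), tuple(ijk % BLOCK)

    def set(self, ijk, sdf):
        block_id, local = self._split(ijk)
        if block_id not in self.blocks:
            self.blocks[block_id] = np.full((BLOCK,) * 3, np.inf)
        self.blocks[block_id][local] = sdf

    def get(self, ijk):
        block_id, local = self._split(ijk)
        block = self.blocks.get(block_id)
        return np.inf if block is None else block[local]
```

The dense interior of each block keeps queries cache-friendly, while the sparse top level means memory grows with surface area rather than scene volume.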
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume-rendering-based neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
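Interpolating an SDF zero-crossing between two consecutive ray samples reduces to a linear root-finding step. A minimal sketch of that step (the function name and signature are illustrative assumptions, not D-NeuS's code):

```python
def surface_depth(ts, sdfs):
    """Locate the first outside-to-inside SDF zero-crossing along a ray
    by linear interpolation between consecutive samples.

    ts: sample depths along the ray, in increasing order.
    sdfs: SDF values at those depths (positive outside, negative inside).
    Returns the interpolated surface depth, or None if no crossing exists.
    """
    for (t0, t1), (s0, s1) in zip(zip(ts, ts[1:]), zip(sdfs, sdfs[1:])):
        if s0 > 0 >= s1:  # sign change: the ray crosses the surface here
            # Solve s0 + (s1 - s0) * (t - t0) / (t1 - t0) = 0 for t.
            return t0 + s0 * (t1 - t0) / (s0 - s1)
    return None
```

The 3D points at these depths are where multi-view feature consistency can then be imposed, since they lie (approximately) on the reconstructed surface.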
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.