NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view
Reconstruction
- URL: http://arxiv.org/abs/2212.05231v3
- Date: Thu, 16 Nov 2023 22:00:04 GMT
- Title: NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view
Reconstruction
- Authors: Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian
Theobalt, Lingjie Liu
- Abstract summary: We propose a fast neural surface reconstruction approach, called NeuS2.
NeuS2 achieves two orders of magnitude improvement in terms of acceleration without compromising reconstruction quality.
We extend our method for fast training of dynamic scenes, with a proposed incremental training strategy and a novel global transformation prediction component.
- Score: 95.37644907940857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods for neural surface representation and rendering, for example
NeuS, have demonstrated the remarkably high-quality reconstruction of static
scenes. However, the training of NeuS takes an extremely long time (8 hours),
which makes it almost impossible to apply it to dynamic scenes with thousands
of frames. We propose a fast neural surface reconstruction approach, called
NeuS2, which achieves two orders of magnitude improvement in terms of
acceleration without compromising reconstruction quality. To accelerate the
training process, we parameterize a neural surface representation by
multi-resolution hash encodings and present a novel lightweight calculation of
second-order derivatives tailored to our networks to leverage CUDA parallelism,
achieving a factor-of-two speedup. To further stabilize and expedite training, a
progressive learning strategy is proposed to optimize multi-resolution hash
encodings from coarse to fine. We extend our method for fast training of
dynamic scenes, with a proposed incremental training strategy and a novel
global transformation prediction component, which allow our method to handle
challenging long sequences with large movements and deformations. Our
experiments on various datasets demonstrate that NeuS2 significantly
outperforms state-of-the-art methods in both surface reconstruction accuracy and
training speed for both static and dynamic scenes. The code is available at our
website: https://vcai.mpi-inf.mpg.de/projects/NeuS2/.
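The multi-resolution hash encoding that drives NeuS2's speedup can be sketched as follows. This is a minimal NumPy illustration of Instant-NGP-style spatial hashing with trilinear interpolation, not the paper's CUDA implementation; all hyper-parameters (level count, table size, feature dimension, growth factor) are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

# Illustrative hyper-parameters (not the paper's exact configuration).
NUM_LEVELS = 4          # number of resolution levels
TABLE_SIZE = 2 ** 14    # entries per per-level hash table
FEATURE_DIM = 2         # trainable features per table entry
BASE_RES = 16           # coarsest grid resolution
GROWTH = 2.0            # per-level resolution growth factor

# One trainable hash table per level, near-zero initialized.
rng = np.random.default_rng(0)
tables = [rng.normal(0.0, 1e-4, (TABLE_SIZE, FEATURE_DIM))
          for _ in range(NUM_LEVELS)]

# Large primes for the spatial hash (Instant-NGP-style).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(coords):
    """XOR-fold integer grid coordinates into a hash-table index."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return (h % TABLE_SIZE).astype(np.int64)

def encode(x):
    """Encode points x in [0,1)^3 into concatenated multi-level features."""
    feats = []
    for lvl in range(NUM_LEVELS):
        res = int(BASE_RES * GROWTH ** lvl)
        pos = x * res
        lo = np.floor(pos).astype(np.int64)
        frac = pos - lo
        out = np.zeros((x.shape[0], FEATURE_DIM))
        # Trilinearly interpolate the 8 surrounding grid corners.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offset, frac, 1.0 - frac),
                        axis=-1, keepdims=True)
            out += w * tables[lvl][spatial_hash(lo + offset)]
        feats.append(out)
    return np.concatenate(feats, axis=-1)  # (N, NUM_LEVELS * FEATURE_DIM)
```

The key property is that lookups and interpolation are cheap and trivially parallel, so most of the model's capacity lives in the hash tables rather than in a deep MLP; this is what makes the lightweight second-order-derivative computation mentioned in the abstract feasible.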
Related papers
- Adaptive and Temporally Consistent Gaussian Surfels for Multi-view Dynamic Reconstruction [3.9363268745580426]
AT-GS is a novel method for reconstructing high-quality dynamic surfaces from multi-view videos through per-frame incremental optimization.
We reduce temporal jittering in dynamic surfaces by ensuring consistency in curvature maps across consecutive frames.
Our method achieves superior accuracy and temporal coherence in dynamic surface reconstruction, delivering high-fidelity space-time novel view synthesis.
arXiv Detail & Related papers (2024-11-10T21:30:16Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural Rendering [8.264851594332677]
We propose an effective mesh-based neural rendering approach, named FastMESH, which only samples at the intersection of ray and mesh.
Experiments demonstrate that our approach achieves the state-of-the-art results on both reconstruction and novel view synthesis.
arXiv Detail & Related papers (2023-05-29T02:43:14Z)
- Temporal Interpolation Is All You Need for Dynamic Neural Radiance Fields [4.863916681385349]
We propose a method to train neural fields of dynamic scenes based on temporal feature vectors.
In the neural representation, we extract features from space-time inputs via multiple neural network modules and interpolate them over time frames.
In the grid representation, space-time features are learned via four-dimensional hash grids, which remarkably reduces training time.
arXiv Detail & Related papers (2023-02-18T12:01:23Z)
- Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction [142.61256012419562]
We present Voxurf, a voxel-based surface reconstruction approach that is both efficient and accurate.
Voxurf addresses the aforementioned issues via several key designs, including 1) a two-stage training procedure that attains a coherent coarse shape and recovers fine details successively, 2) a dual color network that maintains color-geometry dependency, and 3) a hierarchical geometry feature to encourage information propagation across voxels.
arXiv Detail & Related papers (2022-08-26T14:48:02Z)
- Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework by representing scenes with time-aware voxel features, named as TiNeuVox.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training with only 8 minutes and 8-MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT can reconstruct challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
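The unbiased formulation that NeuS introduces can be sketched numerically. In a minimal NumPy illustration (assuming the paper's sigmoid-CDF discretization; the sharpness parameter `s` and sample count below are illustrative), the discrete opacity along a ray is the clipped relative decrease of the sigmoid CDF of the SDF between consecutive samples, which concentrates rendering weight at the zero-level set:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neus_alphas(sdf, s):
    """Discrete opacities along a ray from SDF samples.

    alpha_i is the clipped relative decrease of the sigmoid CDF
    Phi_s between consecutive samples, which makes the resulting
    weights unbiased to first order at the surface crossing.
    """
    cdf = sigmoid(s * sdf)                                   # Phi_s(f(p_i))
    alpha = (cdf[:-1] - cdf[1:]) / np.clip(cdf[:-1], 1e-6, None)
    return np.clip(alpha, 0.0, 1.0)

def render_weights(alpha):
    """Standard alpha compositing: w_i = T_i * alpha_i."""
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha

# A ray whose SDF decreases through zero: weights peak at the crossing.
sdf_samples = np.linspace(1.0, -1.0, 65)
weights = render_weights(neus_alphas(sdf_samples, s=50.0))
```

Here the maximum of `weights` falls near the sample where the SDF changes sign, which is the property the entry above describes: surface reconstruction without the bias of conventional density-based volume rendering.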
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.