Sphere-Guided Training of Neural Implicit Surfaces
- URL: http://arxiv.org/abs/2209.15511v2
- Date: Thu, 13 Apr 2023 13:03:58 GMT
- Title: Sphere-Guided Training of Neural Implicit Surfaces
- Authors: Andreea Dogaru, Andrei Timotei Ardelean, Savva Ignatyev, Egor
Zakharov, Evgeny Burnaev
- Abstract summary: Neural distance functions trained via volumetric ray marching have been widely adopted for multi-view 3D reconstruction.
These methods, however, apply the ray marching procedure for the entire scene volume, leading to reduced sampling efficiency.
We address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.
- Score: 14.882607960908217
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, neural distance functions trained via volumetric ray
marching have been widely adopted for multi-view 3D reconstruction. These
methods, however, apply the ray marching procedure for the entire scene volume,
leading to reduced sampling efficiency and, as a result, lower reconstruction
quality in the areas of high-frequency details. In this work, we address this
problem via joint training of the implicit function and our new coarse
sphere-based surface reconstruction. We use the coarse representation to
efficiently exclude the empty volume of the scene from the volumetric ray
marching procedure without additional forward passes of the neural surface
network, which leads to an increased fidelity of the reconstructions compared
to the base systems. We evaluate our approach by incorporating it into the
training procedures of several implicit surface modeling methods and observe
uniform improvements across both synthetic and real-world datasets. Our
codebase can be accessed via the project page:
https://andreeadogaru.github.io/SphereGuided
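The core idea, excluding empty scene volume from volumetric ray marching using a coarse set of spheres, can be sketched as follows. This is an illustrative brute-force ray-sphere intersection test, not code from the paper's codebase; the function name and interface are assumptions:

```python
import numpy as np

def ray_sphere_intervals(origin, direction, centers, radii):
    """Return (t_near, t_far) intervals where a ray hits any of the spheres.

    Ray marching can restrict its samples to the union of these intervals
    and skip the empty volume entirely, with no forward passes of the
    neural surface network.

    origin: (3,) ray origin; direction: (3,) unit ray direction
    centers: (N, 3) sphere centers; radii: (N,) sphere radii
    """
    oc = origin[None, :] - centers                 # (N, 3) origin-to-center offsets
    b = oc @ direction                             # (N,) quadratic linear term
    c = np.einsum('ij,ij->i', oc, oc) - radii**2   # (N,) quadratic constant term
    disc = b * b - c                               # discriminant of |o + t*d - c|^2 = r^2
    hit = disc > 0.0
    sqrt_disc = np.sqrt(disc[hit])
    t_near = -b[hit] - sqrt_disc
    t_far = -b[hit] + sqrt_disc
    valid = t_far > 0.0                            # keep intervals in front of the origin
    return np.stack([np.maximum(t_near[valid], 0.0), t_far[valid]], axis=-1)
```

For example, a ray from the origin along +x against a unit sphere centered at (5, 0, 0) yields the sampling interval (4, 6); everything outside such intervals is skipped during ray marching.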
Related papers
- Neural Kernel Surface Reconstruction [80.51581494300423]
We present a novel method for reconstructing a 3D implicit surface from a large-scale, sparse, and noisy point cloud.
Our approach builds upon the recently introduced Neural Kernel Fields representation.
arXiv Detail & Related papers (2023-05-31T06:25:18Z)
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF).
On DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable accuracy as MVSNet in full view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometric details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- Improved surface reconstruction using high-frequency details [44.73668037810989]
We propose a novel method to improve the quality of surface reconstruction in neural rendering.
Our results show that our method can reconstruct high-frequency surface details and obtain better surface reconstruction quality than the current state of the art.
arXiv Detail & Related papers (2022-06-15T23:46:48Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but reconstruction quality degrades for larger, more complex scenes and for sparse input views.
This degradation is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion [85.24673400250671]
We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
arXiv Detail & Related papers (2022-04-03T19:33:09Z)
- NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping [29.3378360000956]
We present a novel 3D mapping method leveraging the recent progress in neural implicit representation for 3D reconstruction.
We propose a fusion strategy and training pipeline to incrementally build and update neural implicit representations.
We show that incrementally built occupancy maps can be obtained in real-time even on a CPU.
arXiv Detail & Related papers (2021-10-18T15:45:05Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.