BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion
- URL: http://arxiv.org/abs/2204.01139v1
- Date: Sun, 3 Apr 2022 19:33:09 GMT
- Title: BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion
- Authors: Kejie Li, Yansong Tang, Victor Adrian Prisacariu, Philip H.S. Torr
- Abstract summary: We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
- Score: 85.24673400250671
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dense 3D reconstruction from a stream of depth images is the key to many
mixed reality and robotic applications. Although methods based on Truncated
Signed Distance Function (TSDF) Fusion have advanced the field over the years,
the TSDF volume representation struggles to balance robustness to noisy
measurements with preservation of fine detail. We
present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent
advances in neural implicit representations and neural rendering for dense 3D
reconstruction. In order to incrementally integrate new depth maps into a
global neural implicit representation, we propose a novel bi-level fusion
strategy that considers both efficiency and reconstruction quality by design.
We evaluate the proposed method on multiple datasets quantitatively and
qualitatively, demonstrating a significant improvement over existing methods.
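For context, the classical TSDF fusion that the abstract contrasts against integrates each depth map into a voxel grid with a weighted running average (Curless-Levoy style). Below is a minimal NumPy sketch of that baseline update; the function and its parameters are illustrative, not BNV-Fusion's code.

```python
import numpy as np

def integrate_depth(tsdf, weights, voxel_centers, depth, K, cam_to_world,
                    trunc=0.05):
    """Fuse one depth map into a TSDF volume by a weighted running average.

    tsdf, weights : (N,) running TSDF values and integration weights
    voxel_centers : (N, 3) voxel centers in world coordinates
    depth         : (H, W) depth map in meters; K: (3, 3) intrinsics
    cam_to_world  : (4, 4) camera pose
    """
    # Transform voxel centers into the camera frame and project them.
    world_to_cam = np.linalg.inv(cam_to_world)
    pts = voxel_centers @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts[:, 2]
    z_safe = np.maximum(z, 1e-6)                    # avoid divide-by-zero
    u = np.round(K[0, 0] * pts[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[:, 1] / z_safe + K[1, 2]).astype(int)

    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]

    # Signed distance along the ray, truncated to [-trunc, trunc]; skip
    # voxels far behind the observed surface or with no depth reading.
    sdf = np.clip(d - z, -trunc, trunc)
    update = valid & (d > 0) & (d - z > -trunc)

    # The running average is what makes TSDF fusion robust to sensor noise,
    # and also what smooths away fine detail: the trade-off the abstract
    # says the bi-level neural fusion is designed to escape.
    w_new = weights[update] + 1.0
    tsdf[update] = (tsdf[update] * weights[update] + sdf[update]) / w_new
    weights[update] = w_new
    return tsdf, weights
```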
Related papers
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- C2F2NeUS: Cascade Cost Frustum Fusion for High Fidelity and Generalizable Neural Surface Reconstruction [12.621233209149953]
We introduce a novel integration scheme that combines the multi-view stereo with neural signed distance function representations.
Our method reconstructs robust surfaces and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2023-06-16T17:56:16Z)
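C2F2NeUS, like other neural-SDF reconstructors, renders the signed distance function volumetrically. The usual bridge, introduced by NeuS, converts SDF samples along a ray into alpha-compositing weights via a logistic CDF; the sketch below shows that generic conversion, not the paper's specific cascade scheme.

```python
import numpy as np

def sdf_to_weights(sdf, s=64.0):
    """sdf: (n,) SDF values at ordered samples along one ray; s: sharpness."""
    phi = 1.0 / (1.0 + np.exp(-s * sdf))            # logistic CDF of the SDF
    # Opacity between consecutive samples: positive only where the SDF is
    # decreasing, i.e. where the ray is entering the surface.
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    # Standard alpha compositing: accumulated transmittance times opacity.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha                            # (n - 1,) weights

# Weights peak at the zero crossing of the SDF, so compositing per-sample
# colors or features with them renders (and supervises) the surface.
sdf = np.linspace(0.3, -0.3, 65)                    # ray crossing a surface
print(sdf_to_weights(sdf).argmax())                 # near the middle sample
```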
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF).
On DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable accuracy as MVSNet in full view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
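To make the UDF representation concrete: an unsigned distance field stores only the distance to the nearest surface point, with no inside/outside sign, which is why open and non-watertight topologies become representable. The sketch below evaluates a ground-truth-style UDF from a surface point cloud with a KD-tree; NeuralUDF instead learns such a field from 2D images, so this is illustration, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def udf_from_points(surface_pts, query_pts):
    """surface_pts: (M, 3) samples on the surface; query_pts: (N, 3)."""
    dist, _ = cKDTree(surface_pts).query(query_pts)
    return dist                      # (N,) unsigned distances, always >= 0

# Example: an open disk. Inside/outside is undefined for an open surface,
# so an SDF is ill-posed, but the UDF is perfectly well-behaved.
theta = np.random.rand(10000) * 2 * np.pi
r = np.sqrt(np.random.rand(10000))
disk = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros(10000)], axis=1)
print(udf_from_points(disk, np.array([[0.0, 0.0, 0.5]])))  # ~0.5 above the disk
```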
- Sphere-Guided Training of Neural Implicit Surfaces [14.882607960908217]
Neural distance functions trained via ray marching have been widely adopted for multi-view 3D reconstruction.
These methods, however, apply the ray marching procedure to the entire scene volume, leading to reduced sampling efficiency.
We address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.
arXiv Detail & Related papers (2022-09-30T15:00:03Z)
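The geometric primitive behind sphere guidance is simple: intersect each ray with a coarse set of spheres enclosing the surface and draw samples only inside the covered intervals, rather than across the whole volume. A sketch under assumed inputs (sphere centers and radii from some coarse reconstruction); the paper's joint training is not reproduced here.

```python
import numpy as np

def ray_sphere_intervals(origin, direction, centers, radii):
    """Return (t_near, t_far) per sphere the unit-norm ray actually hits."""
    oc = origin[None, :] - centers                 # (S, 3)
    b = np.sum(oc * direction[None, :], axis=1)    # (S,)
    c = np.sum(oc * oc, axis=1) - radii ** 2
    disc = b * b - c                               # quadratic discriminant
    hit = disc > 0
    sq = np.sqrt(disc[hit])
    t0, t1 = -b[hit] - sq, -b[hit] + sq
    keep = t1 > 0                                  # intersections in front
    return np.maximum(t0[keep], 0.0), t1[keep]

def sample_in_spheres(origin, direction, centers, radii, n=64, rng=np.random):
    t0, t1 = ray_sphere_intervals(origin, direction, centers, radii)
    if len(t0) == 0:
        return np.empty(0)                         # ray misses all spheres
    lengths = t1 - t0
    # Allocate samples proportionally to each interval's length.
    picks = rng.choice(len(t0), size=n, p=lengths / lengths.sum())
    return np.sort(t0[picks] + rng.random(n) * lengths[picks])

ts = sample_in_spheres(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                       centers=np.array([[0.0, 0.0, 2.0]]),
                       radii=np.array([0.5]))
print(ts.min(), ts.max())                          # all within [1.5, 2.5]
```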
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping [29.3378360000956]
We present a novel 3D mapping method leveraging the recent progress in neural implicit representation for 3D reconstruction.
We propose a fusion strategy and training pipeline to incrementally build and update neural implicit representations.
We show that incrementally built occupancy maps can be obtained in real-time even on a CPU.
arXiv Detail & Related papers (2021-10-18T15:45:05Z)
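The fusion strategy in this entry operates in latent space: each new observation is encoded into per-voxel latent codes, and a global latent grid keeps a running average that can be decoded to occupancy on demand. A minimal sketch of that update rule follows; the encoder and decoder networks are assumed to exist and are not shown, and all names are illustrative rather than NeuralBlox's actual API.

```python
import numpy as np

class LatentGrid:
    def __init__(self, shape=(64, 64, 64), dim=32):
        self.latents = np.zeros(shape + (dim,), dtype=np.float32)
        self.counts = np.zeros(shape, dtype=np.float32)

    def integrate(self, voxel_idx, new_latents):
        """voxel_idx: (K, 3) int indices touched by the new observation;
        new_latents: (K, dim) codes from the (hypothetical) encoder.
        Assumes each voxel appears at most once per observation."""
        i, j, k = voxel_idx.T
        c = self.counts[i, j, k][:, None]
        # Incremental mean: cheap, order-independent, constant-memory,
        # which is what makes real-time CPU incremental mapping plausible.
        self.latents[i, j, k] = (self.latents[i, j, k] * c + new_latents) / (c + 1)
        self.counts[i, j, k] += 1
```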
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
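The glue between VolumeFusion's two steps above is standard multi-view geometry: each predicted depth map is lifted into world space with the camera intrinsics and pose before per-view predictions are fused into the shared TSDF volume. A generic back-projection sketch, not the paper's network code:

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """depth: (H, W) meters; K: (3, 3); cam_to_world: (4, 4).
    Returns (P, 3) world-space points for pixels with valid depth."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T               # camera-frame directions
    pts_cam = rays * depth.reshape(-1, 1)         # scale by per-pixel depth
    pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    return pts_world[depth.reshape(-1) > 0]       # drop invalid pixels
```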
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.