VolRecon: Volume Rendering of Signed Ray Distance Functions for
Generalizable Multi-View Reconstruction
- URL: http://arxiv.org/abs/2212.08067v2
- Date: Mon, 3 Apr 2023 06:54:50 GMT
- Title: VolRecon: Volume Rendering of Signed Ray Distance Functions for
Generalizable Multi-View Reconstruction
- Authors: Yufan Ren, Fangjinhua Wang, Tong Zhang, Marc Pollefeys and Sabine Süsstrunk
- Abstract summary: VolRecon is a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF).
On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction.
- Score: 64.09702079593372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of the Neural Radiance Fields (NeRF) in novel view synthesis has
inspired researchers to propose neural implicit scene reconstruction. However,
most existing neural implicit reconstruction methods optimize per-scene
parameters and therefore lack generalizability to new scenes. We introduce
VolRecon, a novel generalizable implicit reconstruction method with a Signed Ray
Distance Function (SRDF). To reconstruct the scene with fine details and little
noise, VolRecon combines projection features aggregated from multi-view
features, and volume features interpolated from a coarse global feature volume.
Using a ray transformer, we compute SRDF values of sampled points on a ray and
then render color and depth. On the DTU dataset, VolRecon outperforms SparseNeuS
by about 30% in sparse-view reconstruction and achieves accuracy comparable to
MVSNet in full-view reconstruction. Furthermore, our approach exhibits good
generalization performance on the large-scale ETH3D benchmark.
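To make the rendering step above concrete, here is a minimal sketch of volume rendering driven by per-sample SRDF values along a single ray. It assumes a NeuS-style sigmoid conversion from SRDF to opacity; the function render_ray_from_srdf, the sharpness parameter s, and the toy fronto-parallel surface are illustrative assumptions rather than the authors' implementation, which additionally aggregates projection and volume features and predicts the SRDF values with a ray transformer.

```python
# Minimal sketch: composite color and depth from per-sample SRDF values on one ray.
# NOT the authors' code; the SRDF-to-opacity conversion below is a NeuS-style
# sigmoid formulation, which is one common choice (an assumption here).
import torch

def render_ray_from_srdf(srdf, colors, depths, s=64.0):
    """srdf:   (N,)   signed ray distance per sample (assumed positive in front
                      of the surface, negative behind it).
       colors: (N, 3) per-sample colors (e.g. from multi-view feature aggregation).
       depths: (N,)   depth of each sample along the ray.
       s:      sharpness of the sigmoid used to turn SRDF into opacity."""
    cdf = torch.sigmoid(s * srdf)                                       # (N,)
    # Opacity of each interval between consecutive samples, clamped to [0, 1].
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)).clamp(0.0, 1.0)  # (N-1,)
    # Transmittance: probability that the ray reaches each interval unoccluded.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-6]), 0)[:-1]
    weights = trans * alpha                                             # (N-1,)
    color = (weights[:, None] * colors[:-1]).sum(dim=0)                 # (3,)
    depth = (weights * depths[:-1]).sum(dim=0)                          # scalar
    return color, depth, weights

# Toy usage: 64 samples on a ray that crosses a surface at depth 1.5.
N = 64
depths = torch.linspace(0.5, 2.5, N)
srdf = 1.5 - depths               # SRDF of a fronto-parallel surface at depth 1.5
colors = torch.rand(N, 3)
color, depth, _ = render_ray_from_srdf(srdf, colors, depths)
print(float(depth))               # close to 1.5
```

In this formulation the rendering weights concentrate around the zero crossing of the SRDF, so the rendered depth can be supervised directly, which is consistent with the abstract's goal of reconstructing fine details with little noise.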
Related papers
- PVP-Recon: Progressive View Planning via Warping Consistency for Sparse-View Surface Reconstruction [49.7580491592023]
We propose PVP-Recon, a novel and effective sparse-view surface reconstruction method.
PVP-Recon starts initial surface reconstruction with as few as 3 views and progressively adds new views.
This progressive view planning process is interleaved with a neural SDF-based reconstruction module.
arXiv Detail & Related papers (2024-09-09T10:06:34Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general-purpose 3D reconstruction method that can reconstruct large, complex scenes with reflections.
Our method robustly reconstructs such complex, large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multi-view datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
arXiv Detail & Related papers (2023-12-02T19:49:27Z)
- ReTR: Modeling Rendering Via Transformer for Generalizable Neural Surface Reconstruction [24.596408773471477]
Reconstruction TRansformer (ReTR) is a novel framework that applies the transformer architecture to the rendering process.
By operating within a high-dimensional feature space rather than the color space, ReTR mitigates sensitivity to projected colors in source views.
arXiv Detail & Related papers (2023-05-30T08:25:23Z)
- Sphere-Guided Training of Neural Implicit Surfaces [14.882607960908217]
Neural distance functions trained via ray marching have been widely adopted for multi-view 3D reconstruction.
These methods, however, apply the ray marching procedure for the entire scene volume, leading to reduced sampling efficiency.
We address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.
arXiv Detail & Related papers (2022-09-30T15:00:03Z)
- BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion [85.24673400250671]
We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
arXiv Detail & Related papers (2022-04-03T19:33:09Z)
- PERF: Performant, Explicit Radiance Fields [1.933681537640272]
We present a novel approach to image-based 3D reconstruction based on radiance fields.
The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks.
arXiv Detail & Related papers (2021-12-10T15:29:00Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)