S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit
Surfaces
- URL: http://arxiv.org/abs/2303.17712v2
- Date: Sun, 3 Sep 2023 03:02:38 GMT
- Title: S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit
Surfaces
- Authors: Haoyu Wu, Alexandros Graikos, Dimitris Samaras
- Abstract summary: Neural rendering of implicit surfaces performs well in 3D vision applications.
When only sparse input images are available, output quality drops significantly due to the shape-radiance ambiguity problem.
We propose to regularize neural rendering optimization with an MVS solution.
- Score: 75.30792581941789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural rendering of implicit surfaces performs well in 3D vision
applications. However, it requires dense input views as supervision. When only
sparse input images are available, output quality drops significantly due to
the shape-radiance ambiguity problem. We note that this ambiguity can be
constrained when a 3D point is visible in multiple views, as is the case in
multi-view stereo (MVS). We thus propose to regularize neural rendering
optimization with an MVS solution. The use of an MVS probability volume and a
generalized cross entropy loss leads to a noise-tolerant optimization process.
In addition, neural rendering provides global consistency constraints that
guide the MVS depth hypothesis sampling and thus improve MVS performance.
Given only three sparse input views, experiments show that our method not only
outperforms generic neural rendering models by a large margin but also
significantly increases the reconstruction quality of MVS models. Project page:
https://hao-yu-wu.github.io/s-volsdf/.
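The noise-tolerant regularizer pairs the MVS probability volume with a generalized cross entropy (GCE) loss. A minimal sketch of the GCE form is given below; the exact weighting and how probabilities are read from the volume in S-VolSDF may differ, and the parameter `q` and per-point probability `p` here are illustrative:

```python
import numpy as np

def generalized_cross_entropy(p, q=0.7, eps=1e-8):
    """GCE loss: (1 - p^q) / q, for the probability p assigned to the
    (possibly noisy) target. As q -> 0 this recovers standard cross-entropy;
    q = 1 gives the mean-absolute-error form, which is robust to label noise.
    """
    p = np.clip(p, eps, 1.0)          # guard against log/power blow-up at p = 0
    return (1.0 - p ** q) / q
```

Intermediate values of `q` trade off the fast convergence of cross-entropy against the noise tolerance of MAE, which is why a GCE-style loss suits supervision from an imperfect MVS probability volume.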
Related papers
- MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views [27.47491233656671]
Novel View Synthesis (NVS) is a significant challenge in 3D vision applications.
We propose MVPGS, a few-shot NVS method that excavates multi-view priors based on 3D Gaussian Splatting.
Experiments show that the proposed method achieves state-of-the-art performance with real-time rendering speed.
arXiv Detail & Related papers (2024-09-22T05:07:20Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Neural Volume Super-Resolution [49.879789224455436]
We propose a neural super-resolution network that operates directly on the volumetric representation of the scene.
To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes.
We validate the proposed method by super-resolving multi-view consistent views on a diverse set of unseen 3D scenes.
arXiv Detail & Related papers (2022-12-09T04:54:13Z)
- Multi-View Photometric Stereo Revisited [100.97116470055273]
Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS that works well for isotropic as well as anisotropic and glossy object materials.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
arXiv Detail & Related papers (2022-10-14T09:46:15Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
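Conditioning an MLP on a learned 3D representation and compositing its outputs along rays follows the standard volume-rendering quadrature. A minimal sketch, with names and shapes that are illustrative rather than the paper's actual code:

```python
import numpy as np

def volume_render(sigma, rgb, deltas):
    """Composite per-sample densities and colors along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i} (1 - alpha_j),
    color = sum_i T_i * alpha_i * rgb_i.
    sigma: (N,) densities; rgb: (N, 3) colors; deltas: (N,) sample spacings.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)  # per-sample opacity
    # transmittance: probability the ray survives up to (not including) sample i
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0), weights
```

A ray passing through a single high-density sample yields that sample's color, since its compositing weight dominates.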
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting method, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z)
- SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis [52.35697180864202]
Multi-view stereopsis (MVS) attempts to recover a 3D model from 2D images.
We investigate sparse-MVS with large baseline angles since the sparser sensation is more practical and more cost-efficient.
We present SurfaceNet+, a volumetric method to handle the 'incompleteness' and the 'inaccuracy' problems induced by a very sparse MVS setup.
arXiv Detail & Related papers (2020-05-26T13:13:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.