Point-Based Neural Rendering with Per-View Optimization
- URL: http://arxiv.org/abs/2109.02369v2
- Date: Wed, 8 Sep 2021 08:46:43 GMT
- Title: Point-Based Neural Rendering with Per-View Optimization
- Authors: Georgios Kopanas, Julien Philip, Thomas Leimkühler, George Drettakis
- Abstract summary: We introduce a general approach that is initialized with MVS but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural renderer, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
- Score: 5.306819482496464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has recently been great interest in neural rendering methods. Some
approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but
cannot recover from the errors of this process, while others directly learn a
volumetric neural representation, but suffer from expensive training and
inference. We introduce a general approach that is initialized with MVS, but
allows further optimization of scene properties in the space of input views,
including depth and reprojected features, resulting in improved novel-view
synthesis. A key element of our approach is our new differentiable point-based
pipeline, based on bi-directional Elliptical Weighted Average splatting, a
probabilistic depth test and effective camera selection. We use these elements
together in our neural renderer, which outperforms all previous methods in both
quality and speed in almost all scenes we tested. Our pipeline can be applied
to multi-view harmonization and stylization in addition to novel-view
synthesis.
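The abstract names two components that lend themselves to a compact illustration: an EWA-style Gaussian splat footprint, and a probabilistic (soft) depth test that keeps occlusion differentiable so gradients can still reach occluded points. The following NumPy sketch illustrates those two ideas only, not the paper's pipeline: the function names, the crude per-pixel front-depth estimate, and the Gaussian fall-off in the depth test are all assumptions, and the bi-directional EWA formulation and camera selection are not reproduced here.

```python
import numpy as np

def ewa_weight(pixel, center, cov2d):
    """Gaussian screen-space footprint of a splatted point. cov2d is the
    point's 2x2 projected covariance (full EWA also convolves it with a
    low-pass filter for anti-aliasing)."""
    d = pixel - center
    return np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d)

def soft_depth_test(z, z_front, sigma_z):
    """Probabilistic stand-in for the hard z-buffer comparison: fragments
    behind the front surface are attenuated smoothly instead of discarded,
    so the operation stays differentiable."""
    return np.exp(-0.5 * (np.maximum(z - z_front, 0.0) / sigma_z) ** 2)

def splat(points_2d, covs_2d, depths, feats, hw, sigma_z=0.05):
    """Accumulate per-point features into an image with normalized weights."""
    h, w = hw
    acc = np.zeros((h, w, feats.shape[1]))
    wsum = np.full((h, w), 1e-8)
    # Crude per-pixel front depth: nearest point rounded to its pixel.
    z_front = np.full((h, w), np.inf)
    for p, z in zip(points_2d, depths):
        x, y = int(round(p[0])), int(round(p[1]))
        if 0 <= y < h and 0 <= x < w:
            z_front[y, x] = min(z_front[y, x], z)
    for p, cov, z, f in zip(points_2d, covs_2d, depths, feats):
        for y in range(h):
            for x in range(w):
                wgt = ewa_weight(np.array([x, y], float), p, cov)
                wgt *= soft_depth_test(z, z_front[y, x], sigma_z)
                acc[y, x] += wgt * f
                wsum[y, x] += wgt
    return acc / wsum[..., None]

# Toy usage: three points with isotropic 1.5-pixel footprints and one-hot
# RGB features; the third point is occluded by the nearer second one.
pts = np.array([[2.0, 2.0], [5.0, 5.0], [5.2, 5.2]])
covs = np.stack([np.eye(2) * 1.5 ** 2] * 3)
img = splat(pts, covs, np.array([1.0, 1.5, 2.0]), np.eye(3), (8, 8))
```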
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel-view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z) - Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z) - NeuManifold: Neural Watertight Manifold Reconstruction with Efficient
and High-Quality Rendering Support [45.68296352822415]
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds; we take the geometry obtained from neural fields, and further optimize the geometry as well as a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z) - Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it matches the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
arXiv Detail & Related papers (2022-12-08T16:29:46Z) - Multi-View Photometric Stereo Revisited [100.97116470055273]
Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS that works well for isotropic as well as anisotropic and glossy object materials.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
arXiv Detail & Related papers (2022-10-14T09:46:15Z) - ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided
Sampling [42.37704606186928]
We propose to build a novel view synthesis framework based on learned MVS priors.
We show that our method achieves 15 to 40 times faster rendering compared to state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-07T14:45:42Z) - NPBG++: Accelerating Neural Point-Based Graphics [14.366073496519139]
NPBG++ is a novel view synthesis (NVS) method that achieves high rendering realism with low scene fitting time (a minimal sketch of the per-point descriptor idea appears after this list).
Our method efficiently leverages the multiview observations and the point cloud of a static scene to predict a neural descriptor for each point.
In our comparisons, the proposed system outperforms previous NVS approaches in terms of fitting and rendering runtimes while producing images of similar quality.
arXiv Detail & Related papers (2022-03-24T19:59:39Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor
Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.