NPBG++: Accelerating Neural Point-Based Graphics
- URL: http://arxiv.org/abs/2203.13318v1
- Date: Thu, 24 Mar 2022 19:59:39 GMT
- Title: NPBG++: Accelerating Neural Point-Based Graphics
- Authors: Ruslan Rakhimov, Andrei-Timotei Ardelean, Victor Lempitsky, Evgeny
Burnaev
- Abstract summary: NPBG++ is a novel view synthesis (NVS) system that achieves high rendering realism with low scene fitting time.
Our method efficiently leverages the multiview observations and the point cloud of a static scene to predict a neural descriptor for each point.
In our comparisons, the proposed system outperforms previous NVS approaches in terms of fitting and rendering runtimes while producing images of similar quality.
- Score: 14.366073496519139
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new system (NPBG++) for the novel view synthesis (NVS) task that
achieves high rendering realism with low scene fitting time. Our method
efficiently leverages the multiview observations and the point cloud of a
static scene to predict a neural descriptor for each point, improving upon the
pipeline of Neural Point-Based Graphics in several important ways. By
predicting the descriptors with a single pass through the source images, we
lift the requirement of per-scene optimization while also making the neural
descriptors view-dependent and more suitable for scenes with strong
non-Lambertian effects. In our comparisons, the proposed system outperforms
previous NVS approaches in terms of fitting and rendering runtimes while
producing images of similar quality.
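To make the pipeline described in the abstract concrete, below is a minimal, illustrative sketch (not the authors' code) of how per-point neural descriptors can be predicted from source-view image features in a single feed-forward pass, rasterized into a feature image, and decoded into RGB. All names here (`DescriptorPredictor`, `rasterize_points`, `RenderNet`) and the PyTorch details are assumptions for illustration; the real system additionally handles visibility, multi-resolution rasterization, and a learned refinement network.

```python
# Hypothetical sketch of a point-based neural rendering pipeline in the spirit of
# NPBG/NPBG++; names and architecture choices are illustrative, not the paper's code.
import torch
import torch.nn as nn

class DescriptorPredictor(nn.Module):
    """Predicts a per-point neural descriptor from image features sampled at the
    point's projections in the source views (one feed-forward pass, no per-scene
    optimization)."""
    def __init__(self, feat_dim=32, desc_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, desc_dim),
        )

    def forward(self, sampled_feats):
        # sampled_feats: (N_points, N_views, feat_dim) features gathered at each
        # point's 2D projections; pool over views, then regress a descriptor.
        pooled = sampled_feats.mean(dim=1)
        return self.mlp(pooled)  # (N_points, desc_dim)

def rasterize_points(xy, descriptors, hw):
    """Naive nearest-pixel splat of per-point descriptors into a feature map
    (real pipelines use z-buffering and multi-scale rasterization)."""
    H, W = hw
    D = descriptors.shape[1]
    canvas = torch.zeros(D, H, W)
    px = (xy[:, 0].clamp(0, 1) * (W - 1)).long()
    py = (xy[:, 1].clamp(0, 1) * (H - 1)).long()
    canvas[:, py, px] = descriptors.t()
    return canvas  # (D, H, W)

class RenderNet(nn.Module):
    """Small convolutional decoder turning rasterized descriptors into RGB,
    standing in for the U-Net style refiner used in point-based graphics."""
    def __init__(self, desc_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(desc_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_img):
        return self.net(feat_img.unsqueeze(0))  # (1, 3, H, W)

# Toy usage with random points and pre-sampled source-view features.
points_xy = torch.rand(1000, 2)              # projected 2D coordinates in [0, 1]
sampled_feats = torch.randn(1000, 4, 32)     # features from 4 source views
desc = DescriptorPredictor()(sampled_feats)  # per-point neural descriptors
rgb = RenderNet()(rasterize_points(points_xy, desc, (128, 128)))
print(rgb.shape)  # torch.Size([1, 3, 128, 128])
```

For brevity the sketch pools source views with a simple mean; the abstract's point about view-dependent descriptors would correspond to an aggregation that also conditions on the target viewing direction.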
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z) - FlowIBR: Leveraging Pre-Training for Efficient Neural Image-Based Rendering of Dynamic Scenes [5.118560450410779]
FlowIBR is a novel approach for efficient monocular novel view synthesis of dynamic scenes.
It integrates a neural image-based rendering method, pre-trained on a large corpus of widely available static scenes, with a per-scene optimized scene flow field.
The proposed method reduces per-scene optimization time by an order of magnitude, achieving comparable rendering quality to existing methods.
arXiv Detail & Related papers (2023-09-11T12:35:17Z) - Scalable Neural Video Representations with Learnable Positional Features [73.51591757726493]
We show how to train neural representations with learnable positional features (NVP) that effectively amortize a video as latent codes.
We demonstrate the superiority of NVP on the popular UVG benchmark; compared with prior methods, NVP not only trains 2 times faster (less than 5 minutes) but also exceeds their encoding quality, improving PSNR from 34.07 to 34.57.
arXiv Detail & Related papers (2022-10-13T08:15:08Z) - Neural Mesh-Based Graphics [5.865500664175491]
We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point feature neural paradigm.
We achieve this through a view-dependent, mesh-based point descriptorization, together with a foreground/background scene rendering split and an improved loss.
We also perform competitively with respect to the state-of-the-art method SVS, which has been trained on the full dataset.
arXiv Detail & Related papers (2022-08-10T09:18:28Z) - Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at a near-interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z) - View Synthesis with Sculpted Neural Points [64.40344086212279]
Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
arXiv Detail & Related papers (2022-05-12T03:54:35Z) - Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting approach, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z) - Fast Training of Neural Lumigraph Representations using Meta Learning [109.92233234681319]
We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
arXiv Detail & Related papers (2021-06-28T18:55:50Z)