View Synthesis with Sculpted Neural Points
- URL: http://arxiv.org/abs/2205.05869v1
- Date: Thu, 12 May 2022 03:54:35 GMT
- Title: View Synthesis with Sculpted Neural Points
- Authors: Yiming Zuo, Jia Deng
- Abstract summary: Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
- Score: 64.40344086212279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the task of view synthesis, which can be posed as recovering a
rendering function that renders new views from a set of existing images. In
many recent works such as NeRF, this rendering function is parameterized using
implicit neural representations of scene geometry. Implicit neural
representations have achieved impressive visual quality but have drawbacks in
computational efficiency. In this work, we propose a new approach that performs
view synthesis using point clouds. It is the first point-based method to
achieve better visual quality than NeRF while being more than 100x faster in
rendering speed. Our approach builds on existing works on differentiable
point-based rendering but introduces a novel technique we call "Sculpted Neural
Points (SNP)", which significantly improves the robustness to errors and holes
in the reconstructed point cloud. Experiments show that on the task of view
synthesis, our sculpting technique closes the gap between point-based and
implicit representation-based methods. Code is available at
https://github.com/princeton-vl/SNP and supplementary video at
https://youtu.be/dBwCQP9uNws.
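To make the abstract's core idea concrete, here is a minimal, hedged sketch of differentiable point-based rendering: points carry learnable feature vectors that are projected into the target view and softly splatted into a feature image, so gradients flow back to the point features. This is illustrative only and is not the SNP implementation (see the linked repository for the authors' code); all function and parameter names below are assumptions, and SNP's distinguishing "sculpting" step, which repairs errors and holes in the reconstructed point cloud, is not shown.

```python
# Illustrative sketch of differentiable point-based rendering (NOT the SNP code;
# see https://github.com/princeton-vl/SNP for the authors' implementation).
import torch

def project_points(xyz, K, w2c):
    """Project world-space points to pixel coordinates and depth.
    xyz: (N, 3) points, K: (3, 3) intrinsics, w2c: (4, 4) world-to-camera."""
    ones = torch.ones_like(xyz[:, :1])
    cam = (w2c @ torch.cat([xyz, ones], dim=1).T).T[:, :3]   # camera-space points
    uvw = (K @ cam.T).T
    depth = uvw[:, 2].clamp(min=1e-6)
    uv = uvw[:, :2] / depth[:, None]
    return uv, depth

def soft_splat(uv, depth, feats, H, W, sigma=1.0):
    """Accumulate per-point features into a (C, H, W) feature image.
    Each point contributes to its nearest pixel with a Gaussian weight on the
    subpixel offset and a crude 1/depth factor; a full renderer would instead
    composite several nearest points per pixel or use z-buffering."""
    C = feats.shape[1]
    px = uv.round().long()
    valid = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
    px, uv, feats, depth = px[valid], uv[valid], feats[valid], depth[valid]
    d2 = ((uv - px.float()) ** 2).sum(dim=1)
    w = torch.exp(-d2 / (2 * sigma ** 2)) / depth
    idx = px[:, 1] * W + px[:, 0]                            # flat pixel index
    img = torch.zeros(C, H * W).index_add(1, idx, (feats * w[:, None]).T)
    wsum = torch.zeros(1, H * W).index_add(1, idx, w[None, :])
    return (img / wsum.clamp(min=1e-8)).view(C, H, W)

# Toy usage: optimize per-point features against some image-space loss.
N, C, H, W = 2048, 8, 64, 64
xyz = torch.rand(N, 3) * 2 - 1
xyz[:, 2] += 3.0                                   # keep points in front of the camera
feats = torch.randn(N, C, requires_grad=True)
K = torch.tensor([[64., 0., 32.], [0., 64., 32.], [0., 0., 1.]])
w2c = torch.eye(4)
uv, depth = project_points(xyz, K, w2c)
feat_img = soft_splat(uv, depth, feats, H, W)      # (C, H, W), differentiable w.r.t. feats
loss = feat_img.mean()                             # stand-in for a photometric loss
loss.backward()                                    # gradients reach the point features
```

In a full pipeline, the splatted feature image would typically be decoded to RGB by a small neural renderer; per the abstract, SNP's additional sculpting of the point set itself is what makes the result robust to holes and errors and closes the gap to implicit methods.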
Related papers
- Few-shot Novel View Synthesis using Depth Aware 3D Gaussian Splatting [0.0]
3D Gaussian splatting has surpassed neural radiance field methods in novel view synthesis.
It produces high-quality renderings when many input views are available, but its performance drops significantly when only a few views are given.
We propose a depth-aware Gaussian splatting method for few-shot novel view synthesis.
arXiv Detail & Related papers (2024-10-14T20:42:30Z)
- MomentsNeRF: Leveraging Orthogonal Moments for Few-Shot Neural Rendering [4.6786468967610055]
We propose MomentsNeRF, a novel framework for one- and few-shot neural rendering.
Our architecture offers a new transfer learning method to train on multiple scenes.
Our approach is the first to successfully harness features extracted from Gabor and Zernike moments.
arXiv Detail & Related papers (2024-07-02T21:02:48Z)
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a "dynamic neural point cloud", an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids (a generic hash-grid lookup is sketched after this entry).
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
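The D-NPC summary above mentions hash-encoded neural feature grids. As a hedged illustration of that building block, here is a generic hash-grid lookup in the spirit of Instant-NGP-style encodings: integer grid corners are hashed into a small learnable table and the corner features are trilinearly interpolated. The resolutions, table sizes, and time conditioning used by D-NPC are not reproduced here, and all names below are assumptions.

```python
# Generic sketch of a hash-encoded feature grid lookup (illustrative only).
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(ijk, table_size):
    """Spatial hash of integer grid coordinates (N, 3) -> (N,) table indices."""
    h = np.zeros(ijk.shape[0], dtype=np.uint64)
    for d in range(3):
        h ^= ijk[:, d].astype(np.uint64) * PRIMES[d]
    return (h % np.uint64(table_size)).astype(np.int64)

def hash_grid_lookup(x, table, resolution):
    """Trilinearly interpolated features for query points x in [0, 1]^3.
    x: (N, 3) points, table: (T, F) learnable feature table, resolution: virtual
    grid resolution at this level. Returns (N, F)."""
    T, F = table.shape
    g = x * (resolution - 1)                  # continuous grid coordinates
    g0 = np.floor(g).astype(np.int64)         # lower corner of the enclosing cell
    frac = g - g0                             # trilinear interpolation weights
    out = np.zeros((x.shape[0], F))
    for corner in range(8):                   # visit the 8 corners of the cell
        offset = np.array([(corner >> d) & 1 for d in range(3)])
        idx = hash_coords(g0 + offset, T)
        w = np.prod(np.where(offset == 1, frac, 1.0 - frac), axis=1)
        out += w[:, None] * table[idx]
    return out

# Toy usage: one level with a 64^3 virtual grid backed by a 2^14-entry table.
rng = np.random.default_rng(0)
table = rng.normal(size=(2 ** 14, 4)).astype(np.float32)
pts = rng.random((5, 3))
print(hash_grid_lookup(pts, table, resolution=64).shape)   # (5, 4)
```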
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
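PNeRFLoc's initial pose estimation from 2D-3D feature matches corresponds to a standard Perspective-n-Point (PnP) problem. The sketch below shows that standard building block using OpenCV's solvePnPRansac; it is not the PNeRFLoc implementation, and the subsequent rendering-based refinement is only indicated in a comment.

```python
# Standard PnP-with-RANSAC pose estimation from 2D-3D matches (illustrative only;
# not the PNeRFLoc implementation).
import cv2
import numpy as np

def estimate_pose(pts3d, pts2d, K):
    """Recover camera rotation/translation from matched 3D points (N, 3)
    and their 2D observations (N, 2) under intrinsics K (3, 3)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        iterationsCount=1000, reprojectionError=3.0, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)              # axis-angle -> rotation matrix
    return R, tvec, inliers                 # world-to-camera pose

# Toy usage with synthetic correspondences (ground-truth pose is the identity).
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
pts3d = np.random.rand(50, 3) + np.array([0., 0., 4.])   # points in front of the camera
proj = (K @ pts3d.T).T
pts2d = proj[:, :2] / proj[:, 2:3]
R, t, inl = estimate_pose(pts3d, pts2d, K)                # R ~ I, t ~ 0
# The estimated pose would then be refined by rendering-based optimization:
# render at the current pose, compare to the query image, and backpropagate
# a photometric loss to the pose parameters.
```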
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- NPBG++: Accelerating Neural Point-Based Graphics [14.366073496519139]
NPBG++ is a novel view synthesis (NVS) method that achieves high rendering realism with low scene fitting time.
Our method efficiently leverages the multiview observations and the point cloud of a static scene to predict a neural descriptor for each point.
In our comparisons, the proposed system outperforms previous NVS approaches in terms of fitting and rendering runtimes while producing images of similar quality.
arXiv Detail & Related papers (2022-03-24T19:59:39Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
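InfoNeRF's regularizer penalizes the entropy of each ray's density distribution so that, with few training views, density concentrates on a small number of samples along the ray. Below is a hedged sketch of how such a ray-entropy term can be computed from standard volume-rendering weights; the paper's further details (masking near-empty rays, a cross-view KL smoothness term) are omitted, and the names are assumptions.

```python
# Sketch of a ray-entropy regularizer in the spirit of InfoNeRF (illustrative only).
import torch

def render_weights(sigma, deltas):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)).
    sigma: (R, S) densities along R rays with S samples, deltas: (R, S) bin sizes."""
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    return alpha * trans

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of each ray's normalized weight distribution."""
    p = weights / (weights.sum(dim=1, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum(dim=1)      # (R,)

# During training, this term would be added to the usual photometric loss, e.g.:
# loss = mse(rgb_pred, rgb_gt) + lambda_entropy * ray_entropy(weights).mean()
```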
This list is automatically generated from the titles and abstracts of the papers on this site.