ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling
- URL: http://arxiv.org/abs/2204.03476v1
- Date: Thu, 7 Apr 2022 14:45:42 GMT
- Title: ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling
- Authors: Yuemei Zhou, Tao Yu, Zerong Zheng, Ying Fu, Yebin Liu
- Abstract summary: We propose to build a novel view synthesis framework based on learned MVS priors.
We show that our method achieves 15 to 40 times faster rendering compared to state-of-the-art baselines.
- Score: 42.37704606186928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing state-of-the-art novel view synthesis methods rely on either fairly
accurate 3D geometry estimation or sampling of the entire space for neural
volumetric rendering, which limit the overall efficiency. In order to improve
the rendering efficiency by reducing sampling points without sacrificing
rendering quality, we propose to build a novel view synthesis framework based
on learned MVS priors that enables general, fast and photo-realistic view
synthesis simultaneously. Specifically, fewer but important points are sampled
under the guidance of depth probability distributions extracted from the
learned MVS architecture. Based on the learned probability-guided sampling, a
neural volume rendering module is carefully designed to fully aggregate source
view information as well as the learned scene structures to synthesize
photorealistic target view images. Finally, the rendering results in uncertain,
occluded and unreferenced regions can be further improved by incorporating a
confidence-aware refinement module. Experiments show that our method achieves
15 to 40 times faster rendering compared to state-of-the-art baselines, with
strong generalization capacity and comparable high-quality novel view synthesis
performance.
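The abstract sketches a three-stage pipeline: sample a few important depths per ray from an MVS-derived depth probability, volume-render the sampled points while aggregating source-view information, and refine uncertain regions with a confidence-aware module. The NumPy sketch below only illustrates those three steps under assumed interfaces and is not the authors' implementation: the depth bins and probabilities stand in for a softmax over an MVS cost volume, the per-sample densities and colors stand in for network predictions, and the confidence-weighted blend is one plausible reading of the refinement step.

```python
# Minimal, illustrative sketch of probability-guided sampling + volume rendering
# + a hypothetical confidence-aware blend. Not the ProbNVS code; all names and
# placeholder inputs are assumptions made for this sketch.
import numpy as np

def sample_depths_from_probability(depth_bins, depth_probs, n_samples, rng):
    """Draw a few important depth samples along one ray.

    depth_bins  : (D,) uniformly spaced candidate depths (e.g. MVS depth hypotheses).
    depth_probs : (D,) per-bin probabilities (e.g. softmax over an MVS cost volume).
    """
    cdf = np.cumsum(depth_probs)
    u = rng.uniform(0.0, 1.0, size=n_samples)
    idx = np.minimum(np.searchsorted(cdf, u), len(depth_bins) - 1)  # inverse-CDF sampling
    jitter = rng.uniform(-0.5, 0.5, size=n_samples) * (depth_bins[1] - depth_bins[0])
    return np.sort(depth_bins[idx] + jitter)  # jitter inside the chosen bins, keep sorted

def composite_along_ray(depths, densities, colors):
    """Standard volume-rendering quadrature over the (few) sampled points."""
    deltas = np.diff(depths, append=depths[-1] + 1e10)              # sample spacing
    alpha = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]   # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0), weights

def confidence_aware_blend(rendered, refined, confidence):
    """Hypothetical reading of the refinement step: keep the rendered pixel where
    confidence is high, fall back to a refined prediction elsewhere."""
    c = confidence[..., None]
    return c * rendered + (1.0 - c) * refined

# Toy usage: 64 depth hypotheses collapsed to only 8 samples for one ray.
rng = np.random.default_rng(0)
bins = np.linspace(2.0, 6.0, 64)
probs = np.exp(-0.5 * ((bins - 4.2) / 0.15) ** 2)
probs /= probs.sum()                                   # peaked depth distribution
depths = sample_depths_from_probability(bins, probs, n_samples=8, rng=rng)
rgb, w = composite_along_ray(depths, rng.uniform(0.0, 5.0, 8), rng.uniform(0.0, 1.0, (8, 3)))
```

In the actual framework the densities, colors, and confidence would come from learned networks that aggregate warped source-view features; random placeholders are used here only to keep the sketch self-contained and runnable.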
Related papers
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
arXiv Detail & Related papers (2024-07-17T08:16:25Z)
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting [58.41056963451056]
We propose a few-shot view synthesis framework based on 3D Gaussian Splatting.
This framework enables real-time and photo-realistic view synthesis with as few as three training views.
FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets.
arXiv Detail & Related papers (2023-12-01T09:30:02Z)
- Generative Novel View Synthesis with 3D-Aware Diffusion Models [96.78397108732233]
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
Our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.
In addition to generating novel views, our method has the ability to autoregressively synthesize 3D-consistent sequences.
arXiv Detail & Related papers (2023-04-05T17:15:47Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting pipeline, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z)
- Fast and Explicit Neural View Synthesis [17.811091108978463]
We study the problem of novel view synthesis of a scene comprised of 3D objects.
We propose a simple yet effective approach that is neither continuous nor implicit.
Our model is trained in a category-agnostic manner and does not require scene-specific optimization.
arXiv Detail & Related papers (2021-07-12T23:24:53Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture; a rough sketch of this learned-sampling idea follows the list below.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
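As a companion to the "NeRF in detail" entry above: that paper replaces NeRF's fixed inverse-CDF resampling heuristic with a differentiable module that proposes fine samples and their importance. The tiny forward-only sketch below is a guessed, simplified stand-in (one hidden layer mapping coarse per-bin weights to a proposed distribution over the same bins); the paper itself compares several architectures, and none of them is claimed to match this one.

```python
# Hedged illustration (not the paper's architecture) of a learned sample proposer:
# it looks at the coarse pass along a ray and outputs an importance distribution
# from which the fine samples would be drawn; in practice it is trained end-to-end.
import numpy as np

def proposer_forward(coarse_weights, params):
    """Map coarse per-bin weights (B,) to a proposed distribution over fine bins (B,)."""
    h = np.tanh(coarse_weights @ params["W1"] + params["b1"])  # hidden features
    logits = h @ params["W2"] + params["b2"]
    logits -= logits.max()                                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()                                         # proposed sample importance

B, H = 64, 32
rng = np.random.default_rng(1)
params = {
    "W1": rng.normal(0.0, 0.1, (B, H)), "b1": np.zeros(H),
    "W2": rng.normal(0.0, 0.1, (H, B)), "b2": np.zeros(B),
}
coarse = rng.dirichlet(np.ones(B))            # stand-in for coarse-network weights on one ray
proposal = proposer_forward(coarse, params)   # fine samples would be drawn from `proposal`
```

In a full system the proposer would be trained jointly with the rendering loss, which is exactly the end-to-end property the vanilla coarse-to-fine heuristic lacks.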
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.