Fast and Explicit Neural View Synthesis
- URL: http://arxiv.org/abs/2107.05775v1
- Date: Mon, 12 Jul 2021 23:24:53 GMT
- Title: Fast and Explicit Neural View Synthesis
- Authors: Pengsheng Guo, Miguel Angel Bautista, Alex Colburn, Liang Yang, Daniel Ulbricht, Joshua M. Susskind, Qi Shan
- Abstract summary: We study the problem of novel view synthesis of a scene comprised of 3D objects.
We propose a simple yet effective approach that is neither continuous nor implicit.
Our model is trained in a category-agnostic manner and does not require scene-specific optimization.
- Score: 17.811091108978463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of novel view synthesis of a scene comprised of 3D
objects. We propose a simple yet effective approach that is neither continuous
nor implicit, challenging recent trends on view synthesis. We demonstrate that
although continuous radiance field representations have gained a lot of
attention due to their expressive power, our simple approach obtains comparable
or even better novel view reconstruction quality compared with
state-of-the-art baselines while increasing rendering speed by over 400x. Our
model is trained in a category-agnostic manner and does not require
scene-specific optimization. Therefore, it is able to generalize novel view
synthesis to object categories not seen during training. In addition, we show
that with our simple formulation, we can use view synthesis as a
self-supervision signal for efficient learning of 3D geometry without explicit
3D supervision.
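To make the explicit (neither continuous nor implicit) formulation concrete, the sketch below shows how an explicit RGB-alpha grid can be turned into a novel view with a single front-to-back compositing pass and no per-ray network queries, which is where the rendering-speed advantage over implicit radiance fields comes from. The grid layout, shapes, and function names here are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (assumed layout, not the authors' code): rendering from an
# explicit scene representation. The representation is a plain tensor produced
# once by a network forward pass, so rendering needs no per-ray MLP queries.
import numpy as np

def render_front_to_back(voxels_rgb: np.ndarray, voxels_alpha: np.ndarray) -> np.ndarray:
    """Alpha-composite a (D, H, W, 3) color grid with (D, H, W) opacities along
    the depth axis, front to back, returning an (H, W, 3) image."""
    transmittance = np.ones(voxels_alpha.shape[1:])   # light not yet absorbed per pixel
    image = np.zeros(voxels_rgb.shape[1:])            # accumulated color
    for d in range(voxels_rgb.shape[0]):              # walk slices from near to far
        weight = transmittance * voxels_alpha[d]      # contribution of this slice
        image += weight[..., None] * voxels_rgb[d]
        transmittance *= 1.0 - voxels_alpha[d]
    return image

# Toy usage: a random grid standing in for the network's single-pass output.
rng = np.random.default_rng(0)
rgb = rng.random((32, 64, 64, 3))
alpha = 0.1 * rng.random((32, 64, 64))
print(render_front_to_back(rgb, alpha).shape)  # (64, 64, 3)
```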
Related papers
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis [63.169364481672915]
We propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images.
Our method takes advantage of the powerful generation capabilities of video diffusion models and the coarse 3D clues offered by a point-based representation to generate high-quality video frames.
arXiv Detail & Related papers (2024-09-03T16:53:19Z)
- SparseCraft: Few-Shot Neural Reconstruction through Stereopsis Guided Geometric Linearization [7.769607568805291]
We present a novel approach for recovering 3D shape and view-dependent appearance from a few colored images.
Our method learns an implicit neural representation in the form of a Signed Distance Function (SDF) and a radiance field.
Key to our contribution is a novel implicit neural shape function learning strategy that encourages our SDF field to be as linear as possible near the level-set.
arXiv Detail & Related papers (2024-07-19T12:36:36Z)
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
arXiv Detail & Related papers (2024-07-17T08:16:25Z)
- Efficient-3DiM: Learning a Generalizable Single-image Novel-view Synthesizer in One Day [63.96075838322437]
We propose a framework to learn a single-image novel-view synthesizer.
Our framework is able to reduce the total training time from 10 days to less than 1 day.
arXiv Detail & Related papers (2023-10-04T17:57:07Z)
- Template-free Articulated Neural Point Clouds for Reposable View Synthesis [11.535440791891217]
We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model from even sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
arXiv Detail & Related papers (2023-05-30T14:28:08Z)
- Learning to Render Novel Views from Wide-Baseline Stereo Pairs [26.528667940013598]
We introduce a method for novel view synthesis given only a single wide-baseline stereo image pair.
Existing approaches to novel view synthesis from sparse observations fail because they recover incorrect 3D geometry.
We propose an efficient, image-space epipolar line sampling scheme to assemble image features for a target ray (an illustrative sketch of this idea follows at the end of this list).
arXiv Detail & Related papers (2023-04-17T17:40:52Z)
- Generative Novel View Synthesis with 3D-Aware Diffusion Models [96.78397108732233]
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
Our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.
In addition to generating novel views, our method has the ability to autoregressively synthesize 3D-consistent sequences.
arXiv Detail & Related papers (2023-04-05T17:15:47Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling [42.37704606186928]
We propose to build a novel view synthesis framework based on learned MVS priors.
We show that our method achieves 15 to 40 times faster rendering compared to state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-07T14:45:42Z)
- Street-view Panoramic Video Synthesis from a Single Satellite Image [92.26826861266784]
We present a novel method for synthesizing both temporally and geometrically consistent street-view panoramic video.
Existing cross-view synthesis approaches focus mainly on images, while video synthesis in this setting has received little attention.
arXiv Detail & Related papers (2020-12-11T20:22:38Z)
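For the wide-baseline stereo pair entry above, the following sketch illustrates image-space epipolar line sampling for a single target ray: points along the ray are projected into the source view and features are gathered along the resulting epipolar line. The camera conventions, argument names, and nearest-neighbour lookup are assumptions for illustration, not that paper's implementation.

```python
# Illustrative sketch of epipolar feature sampling (assumed conventions).
import numpy as np

def sample_epipolar_features(feat, K_src, R_src, t_src, ray_origin, ray_dir, depths):
    """Project points along a target ray into a source view and gather features.

    feat:                 (H, W, C) source-view feature map
    K_src:                (3, 3) source intrinsics
    R_src, t_src:         world-to-source rotation (3, 3) and translation (3,)
    ray_origin, ray_dir:  target ray in world coordinates, (3,) each
    depths:               (N,) sample depths along the ray
    returns:              (N, C) features along the epipolar line (nearest neighbour)
    """
    pts_world = ray_origin[None, :] + depths[:, None] * ray_dir[None, :]  # (N, 3)
    pts_cam = pts_world @ R_src.T + t_src[None, :]                        # to source camera frame
    pix = pts_cam @ K_src.T                                               # homogeneous pixel coords
    uv = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)                    # perspective divide
    h, w, _ = feat.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feat[v, u]

# Toy call: identity pose, simple intrinsics, random features.
rng = np.random.default_rng(0)
K = np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 24.0], [0.0, 0.0, 1.0]])
features = sample_epipolar_features(rng.random((48, 64, 8)), K, np.eye(3), np.zeros(3),
                                    np.zeros(3), np.array([0.0, 0.0, 1.0]), np.linspace(1.0, 5.0, 16))
print(features.shape)  # (16, 8)
```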