VM-NeRF: Tackling Sparsity in NeRF with View Morphing
- URL: http://arxiv.org/abs/2210.04214v2
- Date: Wed, 16 Aug 2023 08:55:29 GMT
- Title: VM-NeRF: Tackling Sparsity in NeRF with View Morphing
- Authors: Matteo Bortolon, Alessio Del Bue, Fabio Poiesi
- Abstract summary: NeRF aims to learn a continuous neural scene representation by using a finite set of input images taken from various viewpoints.
This paper introduces a novel method to generate geometrically consistent image transitions between viewpoints using View Morphing.
- Score: 19.418298933260953
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: NeRF aims to learn a continuous neural scene representation by using a finite
set of input images taken from various viewpoints. A well-known limitation of
NeRF methods is their reliance on data: the fewer the viewpoints, the higher
the likelihood of overfitting. This paper addresses this issue by introducing a
novel method to generate geometrically consistent image transitions between
viewpoints using View Morphing. Our VM-NeRF approach requires no prior
knowledge about the scene structure, as View Morphing is based on the
fundamental principles of projective geometry. VM-NeRF tightly integrates this
geometric view generation process during the training procedure of standard
NeRF approaches. Notably, our method significantly improves novel view
synthesis, particularly when only a few views are available. Experimental
evaluation reveals consistent improvement over current methods that handle
sparse viewpoints in NeRF models. We report an increase in PSNR of up to 1.8 dB
and 1.0 dB when training uses eight and four views, respectively. Source code:
https://github.com/mbortolon97/VM-NeRF
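The geometric interpolation at the heart of View Morphing can be illustrated with a minimal sketch (an illustration of the underlying Seitz–Dyer principle, not the paper's implementation): once two views are rectified so their image planes are parallel, corresponding pixels can be linearly interpolated to yield a geometrically valid in-between view. Correspondences and rectification are assumed given here; the real method obtains them from projective geometry.

```python
import numpy as np

def morph_points(pts0, pts1, s):
    """Linearly interpolate matched points of two rectified views.

    pts0, pts1: (N, 2) arrays of corresponding pixel coordinates.
    s: morph parameter in [0, 1] (0 -> view 0, 1 -> view 1).
    """
    return (1.0 - s) * pts0 + s * pts1

# Two matched points seen in two rectified views (illustrative values).
pts0 = np.array([[10.0, 20.0], [40.0, 20.0]])
pts1 = np.array([[30.0, 20.0], [60.0, 20.0]])
print(morph_points(pts0, pts1, 0.5))  # halfway view: [[20. 20.] [50. 20.]]
```

In rectified views all correspondences share the same row, which is why the interpolated y-coordinates stay constant while x moves between the two views.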
Related papers
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- CorresNeRF: Image Correspondence Priors for Neural Radiance Fields [45.40164120559542]
CorresNeRF is a novel method that leverages image correspondence priors computed by off-the-shelf methods to supervise NeRF training.
We show that this simple yet effective technique of using correspondence priors can be applied as a plug-and-play module across different NeRF variants.
arXiv Detail & Related papers (2023-12-11T18:55:29Z)
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- ManifoldNeRF: View-dependent Image Feature Supervision for Few-shot Neural Radiance Fields [1.8512070255576754]
DietNeRF is an extension of Neural Radiance Fields (NeRF).
DietNeRF assumes that a pre-trained feature extractor should output the same feature even if input images are captured at different viewpoints.
We propose ManifoldNeRF, a method for supervising feature vectors at unknown viewpoints.
arXiv Detail & Related papers (2023-10-20T17:13:52Z)
- Federated Neural Radiance Fields [36.42289161746808]
We consider training NeRFs in a federated manner, whereby multiple compute nodes, each having acquired a distinct set of observations of the overall scene, learn a common NeRF in parallel.
Our contribution is the first federated learning algorithm for NeRF, which splits the training effort across multiple compute nodes and obviates the need to pool the images at a central node.
A technique based on low-rank decomposition of NeRF layers is introduced to reduce bandwidth consumption to transmit the model parameters for aggregation.
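The bandwidth-saving idea can be sketched as follows (a minimal illustration under assumed sizes; the layer dimension of 256 and rank of 16 are not the paper's actual settings): instead of transmitting a full layer weight matrix for aggregation, each node sends a low-rank factorization of it.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))   # one dense MLP layer's weights
rank = 16

# Truncated SVD gives the best rank-16 approximation W ~= A @ B.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]            # shape (256, 16)
B = Vt[:rank]                         # shape (16, 256)

full_params = W.size                  # 65536 values to transmit
sent_params = A.size + B.size         # 8192 values to transmit
print(full_params // sent_params)     # 8x fewer parameters sent
```

The receiver reconstructs the approximate layer as `A @ B`, trading a small approximation error for an eightfold reduction in communicated parameters at this size and rank.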
arXiv Detail & Related papers (2023-05-02T02:33:22Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency [31.22435282922934]
We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
arXiv Detail & Related papers (2023-01-26T05:14:12Z)
- PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs [3.818285175392197]
Neural radiance fields (NeRF) have promising applications for synthesizing novel views of complex scenes.
NeRF requires dense input views, typically numbering in the hundreds, for generating high-quality images.
We propose pseudo-view augmentation of NeRF, a scheme that expands a sufficient amount of data by considering the geometry of few-shot inputs.
arXiv Detail & Related papers (2022-11-23T08:01:10Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
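A ray-entropy regularizer in this spirit can be sketched as follows (a toy illustration with synthetic weights, not InfoNeRF's exact formulation): the per-ray alpha-compositing weights are normalized into a distribution over samples, and its Shannon entropy is penalized so that density concentrates on a few surface points rather than smearing along the ray.

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of the normalized compositing weights along one ray."""
    p = weights / (weights.sum() + eps)
    return -np.sum(p * np.log(p + eps))

diffuse = np.full(64, 1.0 / 64)           # density smeared along the ray
peaked = np.zeros(64)
peaked[30] = 1.0                          # density on a single surface hit

print(ray_entropy(diffuse) > ray_entropy(peaked))  # True
```

Minimizing this entropy (added to the photometric loss with a small coefficient) pushes rays from the diffuse case toward the peaked case, which is the kind of reconstruction-consistency pressure the summary describes.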
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
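The inversion idea can be illustrated with a toy analogue (a hand-written differentiable "renderer" stands in for a trained NeRF here, and the 2-DoF translation is a placeholder for a full 6-DoF pose): starting from an initial guess, the pose is refined by gradient descent on the photometric loss between the rendered and observed images.

```python
import numpy as np

def render(pose, grid):
    # Hypothetical renderer: intensity is a smooth function of pixel
    # coordinates shifted by a 2-DoF camera translation `pose`.
    x, y = grid
    return np.sin(x - pose[0]) * np.cos(y - pose[1])

xs, ys = np.meshgrid(np.linspace(0.0, np.pi, 32), np.linspace(0.0, np.pi, 32))
grid = (xs, ys)
true_pose = np.array([0.4, -0.3])
target = render(true_pose, grid)       # the "observed" image

pose = np.zeros(2)                     # initial pose guess
lr, eps = 0.5, 1e-4
for _ in range(300):
    grad = np.zeros(2)
    for i in range(2):                 # finite-difference photometric gradient
        d = np.zeros(2)
        d[i] = eps
        loss_p = np.mean((render(pose + d, grid) - target) ** 2)
        loss_m = np.mean((render(pose - d, grid) - target) ** 2)
        grad[i] = (loss_p - loss_m) / (2.0 * eps)
    pose -= lr * grad

print(np.round(pose, 2))               # recovers approximately [0.4, -0.3]
```

iNeRF itself backpropagates through the trained NeRF to get analytic pose gradients; the finite differences above merely keep the sketch self-contained.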
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.