VMRF: View Matching Neural Radiance Fields
- URL: http://arxiv.org/abs/2207.02621v2
- Date: Sun, 11 Jun 2023 18:40:36 GMT
- Title: VMRF: View Matching Neural Radiance Fields
- Authors: Jiahui Zhang and Fangneng Zhan and Rongliang Wu and Yingchen Yu and
Wenqing Zhang and Bai Song and Xiaoqin Zhang and Shijian Lu
- Abstract summary: VMRF is an innovative view matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions.
VMRF introduces a view matching scheme, which exploits unbalanced optimal transport to produce a feature transport plan for mapping a rendered image with a randomly initialized camera pose to the corresponding real image.
With the feature transport plan as the guidance, a novel pose calibration technique is designed that rectifies the initially randomized camera poses by predicting the relative pose between each pair of rendered and real images.
- Score: 57.93631771072756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have demonstrated very impressive performance
in novel view synthesis via implicitly modelling 3D representations from
multi-view 2D images. However, most existing studies train NeRF models with
either reasonable camera pose initialization or manually-crafted camera pose
distributions which are often unavailable or hard to acquire in various
real-world data. We design VMRF, an innovative view matching NeRF that enables
effective NeRF training without requiring prior knowledge of camera poses or
camera pose distributions. VMRF introduces a view matching scheme, which
exploits unbalanced optimal transport to produce a feature transport plan for
mapping a rendered image with a randomly initialized camera pose to the
corresponding real image. With the feature transport plan as the guidance, a
novel pose calibration technique is designed which rectifies the initially
randomized camera poses by predicting relative pose transformations between the
pair of rendered and real images. Extensive experiments over a number of
synthetic and real datasets show that the proposed VMRF outperforms the
state-of-the-art qualitatively and quantitatively by large margins.
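The view matching step described in the abstract can be illustrated with a generic KL-relaxed (unbalanced) Sinkhorn solver. This is a minimal sketch of unbalanced entropic optimal transport between image features, not VMRF's actual implementation; the feature dimensions, the squared-Euclidean cost, and the hyperparameters (`eps`, `tau`) are all assumptions made for the toy example:

```python
import numpy as np

def unbalanced_sinkhorn(cost, a, b, eps=0.05, tau=1.0, n_iters=200):
    """KL-relaxed (unbalanced) Sinkhorn: returns a transport plan whose
    marginals only softly match a and b (relaxation strength tau)."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    fi = tau / (tau + eps)           # marginal-relaxation exponent
    for _ in range(n_iters):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]

# toy setup: features of a "rendered" view vs. a "real" view
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(8, 16))
feats_rend = feats_real + 0.01 * rng.normal(size=(8, 16))  # nearly aligned
cost = ((feats_rend[:, None, :] - feats_real[None, :, :]) ** 2).sum(-1)
n = cost.shape[0]
plan = unbalanced_sinkhorn(cost, np.full(n, 1 / n), np.full(n, 1 / n))

# with nearly identical features, each rendered feature maps to its twin
assert (plan.argmax(axis=1) == np.arange(n)).all()
```

The unbalanced variant matters here because a rendered image under a wrong pose may contain features with no good counterpart in the real image; relaxing the marginal constraints lets the plan discard such mass instead of forcing a bad match.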
Related papers
- Pose-Free Neural Radiance Fields via Implicit Pose Regularization [117.648238941948]
IR-NeRF is an innovative pose-free neural radiance field (NeRF) that introduces implicit pose regularization to refine the pose estimator with unposed real images.
With a collection of 2D images of a specific scene, IR-NeRF constructs a scene codebook that stores scene features and captures the scene-specific pose distribution implicitly as priors.
arXiv Detail & Related papers (2023-08-29T06:14:06Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- HyperNeRFGAN: Hypernetwork approach to 3D NeRF GAN [0.4606926019499777]
This paper introduces HyperNeRFGAN, a Generative Adversarial Network (GAN) architecture employing a hypernetwork paradigm to transform Gaussian noise into the weights of a NeRF architecture.
Despite its notable simplicity in comparison to existing state-of-the-art alternatives, the proposed model demonstrates superior performance on a diverse range of image datasets.
arXiv Detail & Related papers (2023-01-27T10:21:18Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Robustifying the Multi-Scale Representation of Neural Radiance Fields [86.69338893753886]
We present a robust multi-scale neural radiance fields representation approach to overcome two common real-world imaging issues.
Our method handles multi-scale imaging effects and camera-pose estimation problems with NeRF-inspired approaches.
We demonstrate, with examples, that for an accurate neural representation of an object from day-to-day acquired multi-view images, it is crucial to have precise camera-pose estimates.
arXiv Detail & Related papers (2022-10-09T11:46:45Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
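BARF's key idea is refining imperfect camera poses by gradient descent jointly with the scene representation. As an illustrative 2D analogue (not BARF itself, which optimizes SE(3) poses through a differentiable NeRF renderer), the following sketch recovers a rigid pose by descending an alignment loss; the scene points, the true pose, and the learning rate are all assumptions made for the toy:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 2))            # "scene" points
theta_true, t_true = 0.3, np.array([0.5, -0.2])
obs = pts @ rot(theta_true).T + t_true    # points seen under the true pose

# start from an imperfect (zero) pose and refine by gradient descent,
# loosely analogous to BARF resolving pose misalignment during training
theta, t = 0.0, np.zeros(2)
lr = 0.1
for _ in range(500):
    resid = pts @ rot(theta).T + t - obs  # alignment residuals
    # derivative of the rotation matrix w.r.t. theta
    dR = np.array([[-np.sin(theta), -np.cos(theta)],
                   [ np.cos(theta), -np.sin(theta)]])
    g_theta = 2.0 * np.mean(np.sum(resid * (pts @ dR.T), axis=1))
    g_t = 2.0 * resid.mean(axis=0)
    theta -= lr * g_theta
    t -= lr * g_t

# the randomly initialized pose converges to the true pose
assert abs(theta - theta_true) < 1e-3
assert np.allclose(t, t_true, atol=1e-3)
```

In the actual method the residuals come from photometric differences between rendered and observed pixels rather than point correspondences, and a coarse-to-fine positional encoding schedule keeps the pose gradients well-behaved.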
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- Nerfies: Deformable Neural Radiance Fields [44.923025540903886]
We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones.
Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF.
We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
arXiv Detail & Related papers (2020-11-25T18:55:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.