BARF: Bundle-Adjusting Neural Radiance Fields
- URL: http://arxiv.org/abs/2104.06405v1
- Date: Tue, 13 Apr 2021 17:59:51 GMT
- Title: BARF: Bundle-Adjusting Neural Radiance Fields
- Authors: Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey
- Abstract summary: We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
- Score: 104.97810696435766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) have recently gained a surge of interest within
the computer vision community for their power to synthesize photorealistic novel
views of real-world scenes. One limitation of NeRF, however, is its requirement
of accurate camera poses to learn the scene representations. In this paper, we
propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from
imperfect (or even unknown) camera poses -- the joint problem of learning
neural 3D representations and registering camera frames. We establish a
theoretical connection to classical image alignment and show that
coarse-to-fine registration is also applicable to NeRF. Furthermore, we show
that na\"ively applying positional encoding in NeRF has a negative impact on
registration with a synthesis-based objective. Experiments on synthetic and
real-world data show that BARF can effectively optimize the neural scene
representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown
camera poses, opening up new avenues for visual localization systems (e.g.
SLAM) and potential applications for dense 3D mapping and reconstruction.
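
The coarse-to-fine registration mentioned above comes down to annealing the frequency bands of NeRF's positional encoding, so that early pose gradients are driven by smooth low-frequency signal rather than by the high-frequency bands that hurt registration. Below is a minimal NumPy sketch of that annealed encoding, following the cosine easing schedule described in the paper; the function names and the num_freqs default are illustrative, not the authors' code.

```python
import numpy as np

def coarse_to_fine_weights(alpha, num_freqs):
    """Per-band weights for the annealed positional encoding.

    alpha ramps from 0 to num_freqs over training. Band k is off while
    alpha < k, eases in smoothly over one unit of alpha, then stays at 1.
    """
    k = np.arange(num_freqs)
    t = np.clip(alpha - k, 0.0, 1.0)          # per-band progress in [0, 1]
    return 0.5 * (1.0 - np.cos(np.pi * t))    # cosine ramp from 0 to 1

def annealed_positional_encoding(x, alpha, num_freqs=10):
    """Encode coordinates x of shape (..., d), weighting each band."""
    w = coarse_to_fine_weights(alpha, num_freqs)           # (num_freqs,)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi          # (num_freqs,)
    ang = x[..., None, :] * freqs[:, None]                 # (..., num_freqs, d)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], -1)   # (..., num_freqs, 2d)
    return (w[:, None] * enc).reshape(*x.shape[:-1], -1)
```

During training, alpha is swept linearly from 0 to num_freqs, after which the encoding coincides with the standard full-frequency NeRF encoding.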
Related papers
- BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream [11.183799667913815]
We show the possibility to recover the neural radiance fields (NeRF) from a single blurry image and its corresponding event stream.
Our method can jointly learn both the implicit neural scene representation and recover the camera motion.
arXiv Detail & Related papers (2024-07-02T11:28:22Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- NeRFuser: Large-Scale Scene Representation by NeRF Fusion [35.749208740102546]
A practical benefit of implicit visual representations like Neural Radiance Fields (NeRFs) is their memory efficiency.
We propose NeRFuser, a novel architecture for NeRF registration and blending that assumes only access to pre-generated NeRFs.
arXiv Detail & Related papers (2023-05-22T17:59:05Z)
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present a novel bundle-adjusted deblur Neural Radiance Field (BAD-NeRF).
BAD-NeRF is robust to severely motion-blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion-blurred image and jointly learns the parameters of NeRF.
arXiv Detail & Related papers (2022-11-23T10:53:37Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields [36.09829614806658]
We propose L2G-NeRF, a Local-to-Global registration method for Neural Radiance Fields.
Pixel-wise local alignment is learned in an unsupervised way via a deep network.
Our method outperforms the current state-of-the-art in terms of high-fidelity reconstruction and resolving large camera pose misalignment.
arXiv Detail & Related papers (2022-11-21T14:43:16Z)
- E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF); a minimal sketch of this pose-refinement loop appears after this list.
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
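
A common thread in BARF, BAD-NeRF, SPARF, and L2G-NeRF above is that the camera pose becomes a learnable parameter optimized jointly with the scene network against a photometric loss; iNeRF is the limiting case where the scene weights are frozen and only the pose is refined. The PyTorch sketch below shows that loop with a toy differentiable stand-in for volume rendering; ToyRenderer, the learning rates, and the raw 6-vector pose parameterization are illustrative assumptions, not any of these papers' actual implementations.

```python
import torch

class ToyRenderer(torch.nn.Module):
    """Toy differentiable 'renderer': 6-DoF pose vector -> pixel colors.

    A real system would volume-render rays through the NeRF; a tiny MLP
    stands in here so the joint-optimization loop runs end to end.
    """
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

    def forward(self, pose):
        return self.mlp(pose)

renderer = ToyRenderer()                     # scene representation parameters
pose = torch.zeros(6, requires_grad=True)    # learnable camera-pose perturbation
target = torch.rand(3)                       # observed pixel colors

# Joint optimization (BARF-style). For iNeRF-style pose-only refinement,
# drop the first parameter group and keep the renderer frozen.
opt = torch.optim.Adam([
    {"params": renderer.parameters(), "lr": 1e-3},
    {"params": [pose], "lr": 1e-2},
])
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(renderer(pose), target)
    loss.backward()                          # gradients flow into the pose too
    opt.step()
```

In practice the pose lives on SE(3) (e.g. via an exponential map of a twist vector) and the loss is accumulated over sampled rays, but the gradient flow through the pose is the same.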