LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
- URL: http://arxiv.org/abs/2306.05410v1
- Date: Thu, 8 Jun 2023 17:56:22 GMT
- Title: LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
- Authors: Zezhou Cheng, Carlos Esteves, Varun Jampani, Abhishek Kar, Subhransu
Maji, Ameesh Makadia
- Abstract summary: A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses.
We propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural fields with relaxed assumptions on pose configuration.
We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior.
- Score: 56.050550636941836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A critical obstacle preventing NeRF models from being deployed broadly in the
wild is their reliance on accurate camera poses. Consequently, there is growing
interest in extending NeRF models to jointly optimize camera poses and scene
representation, which offers an alternative to off-the-shelf SfM pipelines
which have well-understood failure modes. Existing approaches for unposed NeRF
operate under limited assumptions, such as a prior pose distribution or coarse
pose initialization, making them less effective in a general setting. In this
work, we propose a novel approach, LU-NeRF, that jointly estimates camera poses
and neural radiance fields with relaxed assumptions on pose configuration. Our
approach operates in a local-to-global manner, where we first optimize over
local subsets of the data, dubbed mini-scenes. LU-NeRF estimates local pose and
geometry for this challenging few-shot task. The mini-scene poses are brought
into a global reference frame through a robust pose synchronization step, where
a final global optimization of pose and scene can be performed. We show our
LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making
restrictive assumptions on the pose prior. This allows us to operate in the
general SE(3) pose setting, unlike the baselines. Our results also indicate our
model can be complementary to feature-based SfM pipelines as it compares
favorably to COLMAP on low-texture and low-resolution images.
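As a concrete illustration of the synchronization idea, the sketch below recovers globally consistent orientations from pairwise relative rotations between mini-scenes using a standard spectral relaxation. It is not the authors' implementation: the dense pairwise graph, the `synchronize_rotations` helper, and the omission of translations and of any robustness mechanism are simplifying assumptions made only for illustration.

```python
# Minimal sketch of rotation synchronization, the kind of step that brings
# per-mini-scene poses into a single global frame. Assumptions (not from the
# paper): every pair of mini-scenes overlaps, and the relative rotations
# rel[i][j] ~ R_i @ R_j.T are already estimated. Spectral relaxation plus SVD
# projection is used; LU-NeRF's robust synchronization is not reproduced here.
import numpy as np


def project_to_so3(m):
    """Return the rotation closest to m in Frobenius norm (via SVD)."""
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:                      # avoid reflections
        r = u @ np.diag([1.0, 1.0, -1.0]) @ vt
    return r


def synchronize_rotations(rel):
    """rel[i][j]: noisy estimate of R_i R_j^T.

    Returns absolute rotations R_0..R_{n-1} in the gauge where R_0 = I.
    """
    n = len(rel)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            M[3*i:3*i+3, 3*j:3*j+3] = np.eye(3) if i == j else rel[i][j]
    # The three leading eigenvectors of M span the stacked absolute rotations.
    _, vecs = np.linalg.eigh(M)                   # eigenvalues in ascending order
    basis = vecs[:, -3:] * np.sqrt(n)             # shape (3n, 3)
    if np.linalg.det(basis[:3, :]) < 0:           # fix the global handedness
        basis[:, -1] *= -1
    blocks = [project_to_so3(basis[3*i:3*i+3]) for i in range(n)]
    return [b @ blocks[0].T for b in blocks]      # express relative to scene 0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random ground-truth rotations via QR of Gaussian matrices.
    gt = []
    for _ in range(5):
        q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        gt.append(q if np.linalg.det(q) > 0 else -q)
    rel = [[gt[i] @ gt[j].T for j in range(5)] for i in range(5)]
    est = synchronize_rotations(rel)
    # With noise-free input, est[i] should match gt[i] @ gt[0].T.
    print(max(np.abs(est[i] - gt[i] @ gt[0].T).max() for i in range(5)))
```

In practice the pairwise estimates produced by mini-scene registration can be noisy or occasionally wrong, which is why the abstract emphasizes a robust synchronization step before the final global optimization of pose and scene.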
Related papers
- CT-NeRF: Incremental Optimizing Neural Radiance Field and Poses with Complex Trajectory [12.460959809597213]
We propose CT-NeRF, an incremental reconstruction optimization pipeline using only RGB images without pose and depth input.
We evaluate the performance of CT-NeRF on two real-world datasets, NeRFBuster and Free-Dataset.
arXiv Detail & Related papers (2024-04-22T06:07:06Z)
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance of each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field.
We show that modeling per-point provenance during the NeRF optimization enriches the model with visibility information, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z)
- CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning [23.080474939586654]
We propose a novel camera-parameter-free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without requiring prior information or constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z)
- Pose-Free Neural Radiance Fields via Implicit Pose Regularization [117.648238941948]
IR-NeRF is an innovative pose-free neural radiance field (NeRF) that introduces implicit pose regularization to refine the pose estimator with unposed real images.
With a collection of 2D images of a specific scene, IR-NeRF constructs a scene codebook that stores scene features and captures the scene-specific pose distribution implicitly as priors.
arXiv Detail & Related papers (2023-08-29T06:14:06Z)
- DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields [75.35416391705503]
Recent works such as BARF and GARF can adjust camera poses jointly with coordinate-MLP-based neural radiance fields (NeRF).
Despite the impressive results, these methods cannot be applied to Generalizable NeRFs (GeNeRFs), which require image feature extraction.
In this work, we first analyze the difficulties of jointly optimizing camera poses with GeNeRFs, and then further propose our DBARF to tackle these issues.
arXiv Detail & Related papers (2023-03-25T14:18:30Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints to jointly learn the NeRF and refine the camera poses (see the joint-refinement sketch after this list).
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- GNeRF: GAN-based Neural Radiance Field without Posed Camera [67.80805274569354]
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
Our approach outperforms the baselines in scenes with repeated patterns or low texture, which were previously regarded as extremely challenging.
arXiv Detail & Related papers (2021-03-29T13:36:38Z)
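Several of the entries above (BARF, GARF, DBARF, SPARF), as well as LU-NeRF's final global optimization, share a common pattern: camera poses are treated as learnable SE(3) parameters and refined jointly with the scene representation by backpropagating a photometric loss. The sketch below illustrates only that pattern under simplifying assumptions that are not any paper's pipeline: a small MLP stands in for a NeRF renderer, and random tensors stand in for ray samples and observed colors.

```python
# Minimal sketch (not any paper's actual code) of joint pose-and-scene refinement:
# each camera pose is a learnable 6-vector (axis-angle + translation) on SE(3),
# updated together with the scene model by backpropagating a photometric loss.
# The small MLP and random "images" below are placeholders for a NeRF renderer
# and real data, used only to keep the sketch self-contained and runnable.
import torch


def hat(w):
    """Skew-symmetric matrix of a 3-vector, built so gradients flow through w."""
    zero = torch.zeros((), dtype=w.dtype)
    return torch.stack([
        torch.stack([zero, -w[2], w[1]]),
        torch.stack([w[2], zero, -w[0]]),
        torch.stack([-w[1], w[0], zero]),
    ])


def so3_exp(w):
    """Rodrigues' formula: axis-angle (3,) -> 3x3 rotation matrix."""
    theta = torch.sqrt(torch.sum(w * w) + 1e-12)   # safe norm, differentiable at 0
    K = hat(w / theta)
    I = torch.eye(3, dtype=w.dtype)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)


n_images = 4
# Noisy initial poses to be refined: one 6-vector (rotation, translation) per image.
pose_params = (0.05 * torch.randn(n_images, 6)).requires_grad_(True)
# Toy stand-in for a radiance field: maps a 3D point to an RGB color.
scene = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3), torch.nn.Sigmoid())
optim = torch.optim.Adam([{"params": [pose_params], "lr": 1e-3},
                          {"params": scene.parameters(), "lr": 5e-4}])

points = torch.rand(n_images, 128, 3)   # surrogate for per-ray sample points
target = torch.rand(n_images, 128, 3)   # surrogate for observed pixel colors

for step in range(200):
    loss = torch.zeros(())
    for i in range(n_images):
        R = so3_exp(pose_params[i, :3])
        t = pose_params[i, 3:]
        world = points[i] @ R.T + t                       # camera -> world frame
        loss = loss + torch.mean((scene(world) - target[i]) ** 2)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In the actual methods the same backward pass flows through volumetric rendering rather than a bare MLP, and generalizable variants such as DBARF additionally condition the field on extracted image features, which is the source of the joint-optimization difficulties the DBARF entry mentions.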