DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields
- URL: http://arxiv.org/abs/2303.14478v1
- Date: Sat, 25 Mar 2023 14:18:30 GMT
- Title: DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields
- Authors: Yu Chen, Gim Hee Lee
- Abstract summary: Recent works such as BARF and GARF can adjust camera poses with neural radiance fields (NeRF), which are based on coordinate MLPs.
Despite the impressive results, these methods cannot be applied to Generalizable NeRFs (GeNeRFs) which require image feature extractions.
In this work, we first analyze the difficulties of jointly optimizing camera poses with GeNeRFs, and then further propose our DBARF to tackle these issues.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works such as BARF and GARF can bundle-adjust camera poses with neural
radiance fields (NeRF), which are based on coordinate MLPs. Despite the
impressive results, these methods cannot be applied to Generalizable NeRFs
(GeNeRFs) which require image feature extractions that are often based on more
complicated 3D CNN or transformer architectures. In this work, we first analyze
the difficulties of jointly optimizing camera poses with GeNeRFs, and then
further propose our DBARF to tackle these issues. Our DBARF, which bundle-adjusts
camera poses by taking a cost feature map as an implicit cost function, can be
jointly trained with GeNeRFs in a self-supervised manner. Unlike BARF
and its follow-up works, which can only be applied to per-scene optimized NeRFs
and need accurate initial camera poses with the exception of forward-facing
scenes, our method can generalize across scenes and does not require any good
initialization. Experiments show the effectiveness and generalization ability
of our DBARF when evaluated on real-world datasets. Our code is available at
\url{https://aibluefisher.github.io/dbarf}.
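As a rough illustration of the core idea, treating a feature-space cost as an implicit objective that drives iterative camera-pose updates, the toy sketch below refines a 3-parameter pose by descending the mean of a synthetic cost map. The cost function, parameter count, and update rule here are illustrative stand-ins, not DBARF's learned architecture (which predicts updates with a network rather than finite differences):

```python
import numpy as np

def toy_cost_feature_map(pose, target_pose):
    # Stand-in for a learned cost feature map: a per-pixel feature
    # residual whose magnitude grows with the pose error. Purely
    # illustrative; DBARF learns this map from image features.
    residual = pose - target_pose
    grid = np.linspace(-1.0, 1.0, 8)
    u, v = np.meshgrid(grid, grid)
    return (residual[0] * u + residual[1] * v + residual[2]) ** 2

def pose_update_step(pose, cost_fn, lr=0.1, eps=1e-4):
    # Treat the mean of the cost feature map as an implicit scalar
    # objective and take a finite-difference gradient step on the pose.
    base = cost_fn(pose).mean()
    grad = np.zeros_like(pose)
    for i in range(pose.size):
        bumped = pose.copy()
        bumped[i] += eps
        grad[i] = (cost_fn(bumped).mean() - base) / eps
    return pose - lr * grad, base

def refine_pose(init_pose, cost_fn, n_iters=200):
    pose = init_pose.copy()
    cost = np.inf
    for _ in range(n_iters):
        pose, cost = pose_update_step(pose, cost_fn)
    return pose, cost

target = np.array([0.3, -0.2, 0.1])          # "ground-truth" pose (unknown in practice)
cost_fn = lambda p: toy_cost_feature_map(p, target)
refined, final_cost = refine_pose(np.zeros(3), cost_fn)
print(refined, final_cost)  # pose approaches target; cost approaches zero
```

The point of the sketch is only the loop structure: the pose is never supervised directly; it is nudged to reduce a feature-space cost, which is the self-supervised signal the abstract refers to.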
Related papers
- ZeroGS: Training 3D Gaussian Splatting from Unposed Images
We propose ZeroGS to train 3DGS from hundreds of unposed and unordered images.
Our method leverages a pretrained foundation model as the neural scene representation.
Our method recovers more accurate camera poses than state-of-the-art pose-free NeRF/3DGS methods.
arXiv Detail & Related papers (2024-11-24T11:20:48Z)
- CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning
We propose a novel camera-parameter-free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without providing prior information and constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z)
- IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment
We propose IL-NeRF, a novel framework for incremental NeRF training.
We show that IL-NeRF handles incremental NeRF training and outperforms the baselines by up to 54.04% in rendering quality.
arXiv Detail & Related papers (2023-12-10T04:12:27Z)
- LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses.
We propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural fields with relaxed assumptions on pose configuration.
We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior.
arXiv Detail & Related papers (2023-06-08T17:56:22Z)
- Structure-Aware NeRF without Posed Camera via Epipolar Constraint
The neural radiance field (NeRF) for realistic novel view synthesis requires camera poses to be pre-acquired.
We integrate the pose extraction and view synthesis into a single end-to-end procedure so they can benefit from each other.
arXiv Detail & Related papers (2022-10-01T03:57:39Z)
- BARF: Bundle-Adjusting Neural Radiance Fields
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- GNeRF: GAN-based Neural Radiance Field without Posed Camera
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
Our approach outperforms the baselines favorably in scenes with repeated patterns or even low textures, which were previously regarded as extremely challenging.
arXiv Detail & Related papers (2021-03-29T13:36:38Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.