GNeRF: GAN-based Neural Radiance Field without Posed Camera
- URL: http://arxiv.org/abs/2103.15606v2
- Date: Tue, 30 Mar 2021 15:32:11 GMT
- Title: GNeRF: GAN-based Neural Radiance Field without Posed Camera
- Authors: Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming
He, Jingyi Yu
- Abstract summary: We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
Our approach outperforms the baselines in scenes with repeated patterns or even low textures, which were previously regarded as extremely challenging.
- Score: 67.80805274569354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce GNeRF, a framework that marries Generative
Adversarial Networks (GANs) with Neural Radiance Field reconstruction for
complex scenarios with unknown and even randomly initialized camera poses.
Recent NeRF-based advances have gained popularity for remarkably realistic
novel view synthesis. However, most of them rely heavily on accurate camera
pose estimation, and the few recent methods that handle unknown poses can
only optimize them in roughly forward-facing scenes with relatively short
camera trajectories, and still require a rough pose initialization. In
contrast, our GNeRF uses only randomly initialized poses for complex
outside-in scenarios. We propose a novel two-phase end-to-end framework. The
first phase brings GANs into a new realm, jointly optimizing coarse camera
poses and the radiance field, while the second phase refines both with an
additional photometric loss. We overcome local minima using a hybrid,
iterative optimization scheme. Extensive experiments on a variety of
synthetic and natural scenes demonstrate the effectiveness of GNeRF. More
impressively, our approach outperforms the baselines in scenes with repeated
patterns or even low textures, which were previously regarded as extremely
challenging.
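The two-phase scheme can be illustrated on a toy 1D problem. The sketch below is not GNeRF's actual method (which trains a GAN over rendered image patches and optimizes a full radiance field): phase one is replaced by scoring randomly initialized pose candidates against an observation, and phase two refines the best candidate with a photometric loss via finite-difference gradient descent. The scene is held fixed for simplicity, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "scene" is a 1D signal and a camera "pose" is a horizontal
# shift t; rendering a view at pose t samples the scene at shifted coordinates.
xs = np.linspace(0.0, 1.0, 64)
scene = np.sin(2.0 * np.pi * 3.0 * xs)

def render(t):
    return np.interp(xs - t, xs, scene)

true_pose = 0.13                 # unknown pose we want to recover
observation = render(true_pose)  # the captured view

# Phase one (stand-in for the adversarial stage): score randomly initialized
# pose candidates against the observation and keep the best as a coarse pose.
candidates = rng.uniform(-0.3, 0.3, size=200)
errors = [np.mean((render(t) - observation) ** 2) for t in candidates]
pose = candidates[int(np.argmin(errors))]

# Phase two: refine the coarse pose with a photometric loss, here via
# finite-difference gradient descent.
def photometric_loss(t):
    return np.mean((render(t) - observation) ** 2)

eps, lr = 1e-4, 1e-3
for _ in range(200):
    grad = (photometric_loss(pose + eps) - photometric_loss(pose - eps)) / (2 * eps)
    pose -= lr * grad
```

In the real method both the radiance field and the poses are unknown and are optimized jointly; the hybrid, iterative scheme alternates between the two phases to escape local minima.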
Related papers
- CT-NeRF: Incremental Optimizing Neural Radiance Field and Poses with Complex Trajectory [12.460959809597213]
We propose CT-NeRF, an incremental reconstruction optimization pipeline using only RGB images without pose and depth input.
We evaluate the performance of CT-NeRF on two real-world datasets, NeRFBuster and Free-Dataset.
arXiv Detail & Related papers (2024-04-22T06:07:06Z)
- CBARF: Cascaded Bundle-Adjusting Neural Radiance Fields from Imperfect Camera Poses [23.427859480410934]
We propose a novel 3D reconstruction framework that enables simultaneous optimization of camera poses and scene reconstruction.
In a nutshell, our framework optimizes camera poses in a coarse-to-fine manner and then reconstructs scenes based on the rectified poses.
Experimental results demonstrate that our CBARF model achieves state-of-the-art performance in both pose optimization and novel view synthesis.
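The coarse-to-fine refinement that CBARF describes can be sketched on a toy 1D alignment problem. This is only a hypothetical illustration of a cascaded search that narrows the window around the current best pose estimate at each stage, not the paper's actual bundle-adjustment pipeline; all values are illustrative.

```python
import numpy as np

# Toy scene and renderer: a "pose" is a 1D shift of a fixed signal.
xs = np.linspace(0.0, 1.0, 64)
scene = np.sin(2.0 * np.pi * 2.0 * xs) + 0.3 * np.cos(2.0 * np.pi * 5.0 * xs)

def render(t):
    return np.interp(xs - t, xs, scene)

true_pose = 0.086
target = render(true_pose)

# Cascaded coarse-to-fine search: each stage scans a narrower window
# centred on the previous stage's best estimate.
pose, window = 0.0, 0.4
for stage in range(4):
    candidates = pose + np.linspace(-window, window, 21)
    errors = [np.mean((render(t) - target) ** 2) for t in candidates]
    pose = candidates[int(np.argmin(errors))]
    window /= 10.0  # shrink the search window for the next stage
```

The coarse stages tolerate a badly initialized pose, while the fine stages deliver precision; CBARF applies the same intuition with learned scene representations rather than a grid search.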
arXiv Detail & Related papers (2023-10-15T08:34:40Z)
- Pose-Free Neural Radiance Fields via Implicit Pose Regularization [117.648238941948]
IR-NeRF is an innovative pose-free neural radiance field (NeRF) that introduces implicit pose regularization to refine the pose estimator with unposed real images.
With a collection of 2D images of a specific scene, IR-NeRF constructs a scene codebook that stores scene features and captures the scene-specific pose distribution implicitly as priors.
arXiv Detail & Related papers (2023-08-29T06:14:06Z)
- LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs [56.050550636941836]
A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses.
We propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural fields with relaxed assumptions on pose configuration.
We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior.
arXiv Detail & Related papers (2023-06-08T17:56:22Z)
- Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
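The allocation policy sketched in this summary can be expressed as a simple scheduling rule. The snippet below is a hypothetical illustration of assigning video frames to local models by temporal window; the function name and data layout are invented here, and the paper's actual logic also handles window overlap and progressive pose estimation.

```python
def allocate_local_fields(num_frames: int, window: int = 30) -> list[dict]:
    """Assign each frame of a video to a local radiance field whose
    training set is limited to a temporal window (illustrative only)."""
    fields: list[dict] = []
    for frame in range(num_frames):
        # Start a new local field once the current window is exhausted.
        if not fields or frame - fields[-1]["start"] >= window:
            fields.append({"start": frame, "frames": []})
        fields[-1]["frames"].append(frame)
    return fields

# For a 100-frame video and a 30-frame window, this yields four local
# fields starting at frames 0, 30, 60, and 90.
fields = allocate_local_fields(100, window=30)
```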
arXiv Detail & Related papers (2023-03-24T04:03:55Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
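BARF's joint optimization of the scene representation and the camera poses can be illustrated on a toy problem: a 1D "scene" parameterized by Fourier coefficients and one view with an unknown shift, both recovered by gradient descent on a photometric loss. This is a sketch only, not the paper's method, which trains a full NeRF with a coarse-to-fine positional-encoding schedule; all values here are illustrative.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 128, endpoint=False)

def render(coeffs, t):
    """Toy renderer: evaluate a single-frequency Fourier scene at
    coordinates shifted by the camera 'pose' t."""
    a, b = coeffs
    return a * np.sin(2 * np.pi * (xs - t)) + b * np.cos(2 * np.pi * (xs - t))

true_coeffs, true_shift = np.array([1.0, 0.4]), 0.05
view0 = render(true_coeffs, 0.0)         # reference view with known pose
view1 = render(true_coeffs, true_shift)  # view with unknown pose

coeffs = np.zeros(2)  # scene parameters, learned from scratch
shift = 0.0           # imperfect pose, refined jointly with the scene

def loss(c, t):
    # A photometric loss over both views ties scene and pose together.
    return (np.mean((render(c, 0.0) - view0) ** 2)
            + np.mean((render(c, t) - view1) ** 2))

eps, lr_c, lr_t = 1e-5, 0.2, 0.01
for _ in range(300):
    # Finite-difference gradients for the scene coefficients and the pose.
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (loss(coeffs + d, shift) - loss(coeffs - d, shift)) / (2 * eps)
    g_t = (loss(coeffs, shift + eps) - loss(coeffs, shift - eps)) / (2 * eps)
    coeffs -= lr_c * g
    shift -= lr_t * g_t
```

Because the reference view anchors the scene, the joint problem has a unique solution here; in the full setting, BARF relies on its coarse-to-fine schedule to avoid the local minima that plague naive joint optimization.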
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.