NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior
- URL: http://arxiv.org/abs/2212.07388v3
- Date: Fri, 14 Apr 2023 13:19:14 GMT
- Title: NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior
- Authors: Wenjing Bian, Zirui Wang, Kejie Li, Jia-Wang Bian, Victor Adrian
Prisacariu
- Abstract summary: Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging.
Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes.
We tackle this challenging problem by incorporating undistorted monocular depth priors.
These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames.
- Score: 22.579857008706206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a Neural Radiance Field (NeRF) without pre-computed camera poses is
challenging. Recent advances in this direction demonstrate the possibility of
jointly optimising a NeRF and camera poses in forward-facing scenes. However,
these methods still face difficulties during dramatic camera movement. We
tackle this challenging problem by incorporating undistorted monocular depth
priors. These priors are generated by correcting scale and shift parameters
during training, with which we are then able to constrain the relative poses
between consecutive frames. This constraint is achieved using our proposed
novel loss functions. Experiments on real-world indoor and outdoor scenes show
that our method can handle challenging camera trajectories and outperforms
existing methods in terms of novel view rendering quality and pose estimation
accuracy. Our project page is https://nope-nerf.active.vision.
Related papers
- COLMAP-Free 3D Gaussian Splatting [88.420322646756]
We propose a novel method to perform novel view synthesis without any SfM preprocessing.
We process the input frames sequentially and progressively grow the set of 3D Gaussians by taking one input frame at a time.
Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes.
arXiv Detail & Related papers (2023-12-12T18:39:52Z) - IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera
Pose Alignment [12.580584725107173]
We propose IL-NeRF, a novel framework for incremental NeRF training.
We show that IL-NeRF handles incremental NeRF training and outperforms the baselines by up to 54.04% in rendering quality.
arXiv Detail & Related papers (2023-12-10T04:12:27Z) - CBARF: Cascaded Bundle-Adjusting Neural Radiance Fields from Imperfect
Camera Poses [23.427859480410934]
We propose a novel 3D reconstruction framework that enables simultaneous optimization of camera poses and scene reconstruction.
In a nutshell, our framework optimizes camera poses in a coarse-to-fine manner and then reconstructs scenes based on the rectified poses.
Experimental results demonstrate that our CBARF model achieves state-of-the-art performance in both pose optimization and novel view synthesis.
arXiv Detail & Related papers (2023-10-15T08:34:40Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z) - SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary
Image collections [49.3480550339732]
Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics.
We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination.
Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR.
arXiv Detail & Related papers (2022-05-31T13:16:48Z) - GNeRF: GAN-based Neural Radiance Field without Posed Camera [67.80805274569354]
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
Our approach compares favorably to the baselines in scenes with repeated patterns or low texture, which were previously regarded as extremely challenging.
arXiv Detail & Related papers (2021-03-29T13:36:38Z) - PhotoApp: Photorealistic Appearance Editing of Head Portraits [97.23638022484153]
We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image.
Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages.
We design a supervised learning problem in the latent space of StyleGAN.
This combines the best of supervised learning and generative adversarial modeling.
arXiv Detail & Related papers (2021-03-13T08:59:49Z)