SPARF: Neural Radiance Fields from Sparse and Noisy Poses
- URL: http://arxiv.org/abs/2211.11738v3
- Date: Tue, 13 Jun 2023 14:32:05 GMT
- Title: SPARF: Neural Radiance Fields from Sparse and Noisy Poses
- Authors: Prune Truong, Marie-Julie Rakotosaona, Fabian Manhardt, and Federico Tombari
- Abstract summary: We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints to jointly learn the NeRF and refine the camera poses.
- Score: 58.528358231885846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) has recently emerged as a powerful
representation to synthesize photorealistic novel views. While showing
impressive performance, it relies on the availability of dense input views with
highly accurate camera poses, thus limiting its application in real-world
scenarios. In this work, we introduce Sparse Pose Adjusting Radiance Field
(SPARF) to address the challenge of novel-view synthesis given only a few
wide-baseline input images (as few as 3) with noisy camera poses. Our approach
exploits multi-view geometry constraints to jointly learn the NeRF and
refine the camera poses. By relying on pixel matches extracted between the
input views, our multi-view correspondence objective drives the optimized
scene and camera poses to converge to a global, geometrically accurate
solution. Our depth consistency loss further encourages the reconstructed scene
to be consistent from any viewpoint. Our approach sets a new state of the art
in the sparse-view regime on multiple challenging datasets.
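To make the multi-view correspondence objective concrete, here is a minimal sketch in PyTorch, assuming a shared pinhole intrinsic matrix K, 4x4 world-to-camera poses, NeRF-rendered depth read as z-depth, and a Huber penalty; the names are illustrative, and the paper's exact formulation and weighting may differ.

```python
import torch
import torch.nn.functional as F

def correspondence_loss(p_i, p_j, depth_i, K, T_i, T_j, conf=None):
    """Sketch of a multi-view correspondence objective (not the authors' code).

    p_i, p_j : (N, 2) matched pixel coordinates in views i and j
    depth_i  : (N,)   NeRF-rendered z-depth at p_i
    K        : (3, 3) shared pinhole intrinsics
    T_i, T_j : (4, 4) current world-to-camera pose estimates
    conf     : (N,)   optional match confidences
    """
    ones = torch.ones_like(p_i[:, :1])

    # Backproject p_i into camera i using the rendered depth.
    rays = torch.linalg.solve(K, torch.cat([p_i, ones], -1).T).T  # (N, 3)
    X_i = rays * depth_i[:, None]                                 # z equals depth_i

    # Map camera i -> world -> camera j with the current pose estimates.
    T_rel = T_j @ torch.linalg.inv(T_i)
    X_j = (T_rel @ torch.cat([X_i, ones], -1).T).T[:, :3]

    # Project into view j and compare against the matched pixel.
    proj = (K @ X_j.T).T
    p_hat = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Robust reprojection error, optionally weighted by match confidence.
    err = F.huber_loss(p_hat, p_j, reduction='none').sum(-1)
    if conf is not None:
        err = conf * err
    return err.mean()
```

Because the rendered depth depends on the NeRF and the reprojection depends on both pose estimates, gradients from this single loss update the scene representation and the camera poses jointly, which is what lets pixel matches pull both toward a globally consistent solution.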
Related papers
- CT-NeRF: Incremental Optimizing Neural Radiance Field and Poses with Complex Trajectory [12.460959809597213]
We propose CT-NeRF, an incremental reconstruction optimization pipeline that uses only RGB images, without pose or depth input.
We evaluate the performance of CT-NeRF on two real-world datasets, NeRFBuster and Free-Dataset.
arXiv Detail & Related papers (2024-04-22T06:07:06Z)
- MetaCap: Meta-learning Priors from Multi-View Imagery for Sparse-view Human Performance Capture and Rendering [91.76893697171117]
We propose a method for efficient and high-quality geometry recovery and novel view synthesis given very sparse views, or even a single view, of the human.
Our key idea is to meta-learn the radiance field weights solely from potentially sparse multi-view videos.
We collect a new dataset, WildDynaCap, which contains subjects captured both in a dense camera dome and with in-the-wild sparse camera rigs.
arXiv Detail & Related papers (2024-03-27T17:59:54Z)
- Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses with the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representation and resolve large camera pose misalignment at the same time via a coarse-to-fine registration schedule (a sketch follows this list).
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis [86.38901313994734]
We present DietNeRF, a 3D neural scene representation estimated from a few images.
NeRF learns a continuous volumetric representation of a scene through multi-view consistency.
We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses.
arXiv Detail & Related papers (2021-04-01T17:59:31Z)
- GNeRF: GAN-based Neural Radiance Field without Posed Camera [67.80805274569354]
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses.
Our approach compares favorably against the baselines on scenes with repeated patterns or low texture, previously regarded as extremely challenging.
arXiv Detail & Related papers (2021-03-29T13:36:38Z)
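As noted in the BARF entry above, jointly optimizing camera poses and a NeRF is prone to poor local minima when high-frequency positional encodings are active from the start. Below is a minimal, hypothetical PyTorch sketch of a BARF-style coarse-to-fine schedule that gates the encoding's frequency bands with a ramp parameter alpha; the function names and exact ramp are assumptions for illustration, not the authors' code.

```python
import math
import torch

def band_weights(num_freqs: int, alpha: float) -> torch.Tensor:
    """Weight for frequency band k: 0 while alpha < k, 1 once
    alpha >= k + 1, with a smooth cosine blend in between."""
    k = torch.arange(num_freqs, dtype=torch.float32)
    t = (alpha - k).clamp(0.0, 1.0)
    return 0.5 * (1.0 - torch.cos(math.pi * t))

def coarse_to_fine_encoding(x: torch.Tensor, num_freqs: int,
                            alpha: float) -> torch.Tensor:
    """Sinusoidal encoding of x, shape (..., D), with bands gated by alpha."""
    w = band_weights(num_freqs, alpha)                           # (L,)
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)  # (L,)
    xb = math.pi * x[..., None, :] * freqs[:, None]              # (..., L, D)
    enc = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)      # (..., L, 2D)
    return (w[:, None] * enc).flatten(-2)                        # (..., L*2D)
```

During training, alpha would be ramped (for example, linearly) from 0 to num_freqs over a chosen span of iterations, so smooth low-frequency signal drives pose registration before fine detail is unlocked.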