PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields
- URL: http://arxiv.org/abs/2312.10649v1
- Date: Sun, 17 Dec 2023 08:30:00 GMT
- Title: PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields
- Authors: Boming Zhao, Luwei Yang, Mao Mao, Hujun Bao, Zhaopeng Cui
- Abstract summary: We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
- Score: 54.8553158441296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the ability to synthesize high-quality novel views, Neural Radiance
Fields (NeRF) have been recently exploited to improve visual localization in a
known environment. However, the existing methods mostly utilize NeRFs for data
augmentation to improve the regression model training, and the performance on
novel viewpoints and appearances is still limited due to the lack of geometric
constraints. In this paper, we propose a novel visual localization framework,
i.e., PNeRFLoc, based on a unified point-based representation. On the one hand,
PNeRFLoc supports initial pose estimation by matching 2D and 3D feature
points, as in traditional structure-based methods; on the other hand, it also
enables pose refinement with novel view synthesis using rendering-based
optimization. Specifically, we propose a novel feature adaption module to close
the gaps between the features for visual localization and neural rendering. To
improve the efficacy and efficiency of neural rendering-based optimization, we
also develop an efficient rendering-based framework with a warping loss
function. Furthermore, several robustness techniques are developed to handle
illumination changes and dynamic objects for outdoor scenarios. Experiments
demonstrate that PNeRFLoc performs the best on synthetic data when the NeRF
model can be well learned and performs on par with the SOTA method on the
visual localization benchmark datasets.
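The two-stage design described in the abstract can be illustrated with a short sketch: an initial pose from 2D-3D correspondences via PnP with RANSAC, followed by rendering-based refinement that minimizes a photometric discrepancy between the query image and the rendered view. This is a minimal sketch under assumptions, not the authors' implementation: `render_from_pose` stands in for PNeRFLoc's point-based differentiable renderer, the feature-matching step is assumed done upstream, and the intrinsics `K` are assumed known.

```python
# Minimal two-stage localization sketch (not the PNeRFLoc implementation).
# Stage 1: initial pose from 2D-3D matches via PnP + RANSAC.
# Stage 2: rendering-based refinement of the pose against the query image.
import cv2
import numpy as np
import torch

def initial_pose(pts3d, pts2d, K):
    """Estimate an initial camera pose from matched 2D-3D points."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    assert ok, "PnP failed"
    return rvec, tvec

def refine_pose(query_img, rvec, tvec, render_from_pose, iters=100, lr=1e-3):
    """Refine the pose by minimizing a photometric loss between the query image
    and the view rendered from the current pose estimate.
    `render_from_pose` is a placeholder for a differentiable renderer."""
    pose = torch.tensor(np.concatenate([rvec.ravel(), tvec.ravel()]),
                        dtype=torch.float32, requires_grad=True)
    target = torch.tensor(query_img, dtype=torch.float32)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        rendered = render_from_pose(pose)                 # differentiable rendering
        loss = torch.mean(torch.abs(rendered - target))   # L1 photometric loss
        loss.backward()
        opt.step()
    return pose.detach()
```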
Related papers
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time, and achieves state-of-the-art view synthesis performance when thoroughly trained.
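The summary above frames the dynamic radiance field as a dynamic system whose time-varying state is estimated from both predictions and observations, which is the classic Kalman predict/update cycle. The following is a generic linear Kalman filter sketch for illustration only; the state, transition model `F`, and observation model `H` are placeholders, not KFD-NeRF's actual formulation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : previous state estimate and covariance
    z    : current observation
    F, H : transition and observation models
    Q, R : process and observation noise covariances"""
    # Predict: propagate the previous state through the dynamics model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the prediction with the observation.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```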
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
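Casting a secondary ray from a point along the camera ray amounts to reflecting the view direction about the local surface normal and marching that reflected ray through the scene representation. A minimal sketch of that geometric step is below; `query_features` is a hypothetical stand-in for accumulating feature vectors along the reflected ray, and this is not NeRF-Casting's actual renderer.

```python
import numpy as np

def reflect(d, n):
    """Reflect a (unit) view direction d about a (unit) surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def cast_reflection_ray(point, view_dir, normal, query_features,
                        t_max=10.0, n_samples=64):
    """March a reflected ray from `point` and accumulate features along it.
    `query_features(positions, direction)` is a placeholder for evaluating
    the scene representation at the sampled positions."""
    r = reflect(view_dir / np.linalg.norm(view_dir),
                normal / np.linalg.norm(normal))
    ts = np.linspace(1e-3, t_max, n_samples)
    samples = point[None, :] + ts[:, None] * r[None, :]
    return query_features(samples, r)
```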
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance [29.267039546199094]
In this paper, we aim to create 3D models that provide accurate geometry and view synthesis.
We propose a patch-based approach that effectively leverages monocular surface normal and relative depth predictions.
Experiments show 4x the performance of RegNeRF and 8x that of FreeNeRF on average F1@2cm on the ETH3D MVS benchmark.
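Patch-based monocular guidance of this kind is typically realized as extra losses that compare rendered normals and depth against the monocular predictions over image patches. The sketch below only illustrates that idea (a cosine loss on normals and a scale/shift-invariant loss on relative depth); it is not the paper's exact objective.

```python
import torch

def normal_patch_loss(rendered_normals, mono_normals):
    """Penalize angular deviation between rendered and monocular normals (per patch)."""
    cos = torch.sum(rendered_normals * mono_normals, dim=-1)
    return torch.mean(1.0 - cos)

def relative_depth_loss(rendered_depth, mono_depth):
    """Align monocular (relative) depth to rendered depth with a per-patch
    scale and shift, then penalize the residual."""
    d = mono_depth.reshape(-1)
    t = rendered_depth.reshape(-1)
    # Closed-form least squares for scale s and shift b minimizing ||s*d + b - t||^2.
    d_mean, t_mean = d.mean(), t.mean()
    s = torch.sum((d - d_mean) * (t - t_mean)) / (torch.sum((d - d_mean) ** 2) + 1e-8)
    b = t_mean - s * d_mean
    aligned = s * d + b
    return torch.mean((aligned - t) ** 2)
```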
arXiv Detail & Related papers (2024-04-12T05:43:10Z)
- Leveraging Neural Radiance Field in Descriptor Synthesis for Keypoints Scene Coordinate Regression [1.2974519529978974]
This paper introduces a pipeline for keypoint descriptor synthesis using Neural Radiance Fields (NeRF).
By generating novel poses and feeding them into a trained NeRF model to create new views, our approach enhances the KSCR's capabilities in data-scarce environments.
The proposed system can improve localization accuracy by up to 50% while requiring only a fraction of the time for data synthesis.
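The augmentation loop described above can be sketched as: perturb known training poses, render novel views with the trained NeRF, and extract keypoints and descriptors from the synthesized images to enlarge the regressor's training set. All function names below (`render_view`, `extract_keypoints_and_descriptors`) are hypothetical placeholders, not the paper's API.

```python
import numpy as np

def synthesize_training_data(train_poses, render_view,
                             extract_keypoints_and_descriptors,
                             n_views_per_pose=4, trans_noise=0.1):
    """Augment a keypoint scene-coordinate-regression dataset with NeRF renderings."""
    augmented = []
    for pose in train_poses:                               # pose: 4x4 camera-to-world matrix
        for _ in range(n_views_per_pose):
            # Sample a novel pose near an existing training pose
            # (rotation jitter omitted for brevity).
            new_pose = pose.copy()
            new_pose[:3, 3] += np.random.normal(0, trans_noise, size=3)
            image = render_view(new_pose)                  # synthesize the view with the NeRF
            kps, descs = extract_keypoints_and_descriptors(image)
            augmented.append((new_pose, kps, descs))
    return augmented
```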
arXiv Detail & Related papers (2024-03-15T13:40:37Z)
- Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
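Multi-resolution hash grids in the Instant-NGP style map a 3D position to per-level feature vectors by hashing the surrounding grid vertices and interpolating; concatenating the levels gives the spatial feature. The sketch below shows the lookup for a single level with nearest-vertex hashing (trilinear interpolation omitted for brevity); it is a generic illustration, not VoxNeRF's implementation.

```python
import torch

PRIMES = (1, 2654435761, 805459861)  # hashing primes used in Instant-NGP-style encodings

def hash_coords(coords, table_size):
    """Spatial hash of integer grid coordinates into [0, table_size)."""
    h = coords[..., 0] * PRIMES[0]
    h ^= coords[..., 1] * PRIMES[1]
    h ^= coords[..., 2] * PRIMES[2]
    return h % table_size

def hash_grid_features(xyz, table, resolution):
    """Look up per-point features from one hash-grid level (nearest vertex only).
    xyz: [N, 3] points in [0, 1]^3; table: [table_size, F] learnable features."""
    grid = torch.floor(xyz * resolution).long()   # integer cell coordinates
    idx = hash_coords(grid, table.shape[0])
    return table[idx]                             # [N, F] per-point features
```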
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascade and generalizable neural radiance fields method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
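Guiding NeRF optimization with SfM-derived depth priors is commonly done by concentrating ray samples in a band around the prior depth instead of spreading them over the whole near/far range. The sketch below illustrates only that general idea; the adapted depth-prior network and error-band estimation used by NerfingMVS are not reproduced here.

```python
import torch

def guided_sample_depths(prior_depth, n_samples=64, rel_band=0.2, near=0.1, far=10.0):
    """Sample per-ray depths uniformly inside a band around a per-ray depth prior.
    prior_depth: [R] prior depth per ray (e.g., from SfM or an adapted depth network)."""
    lo = torch.clamp(prior_depth * (1.0 - rel_band), min=near)
    hi = torch.clamp(prior_depth * (1.0 + rel_band), max=far)
    t = torch.linspace(0.0, 1.0, n_samples, device=prior_depth.device)    # [S]
    return lo[:, None] + (hi - lo)[:, None] * t[None, :]                  # [R, S]
```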
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
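A differentiable proposal module of this kind takes the coarse samples (and typically their weights along the ray) and predicts where the fine network should sample, so that sample placement is trained with the rendering loss rather than fixed by a heuristic. The sketch below is a minimal MLP variant for illustration only; the architectures compared in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    """Predict fine-sample depths along each ray from coarse-sample weights.
    Trained end-to-end with the rendering loss, unlike heuristic coarse-to-fine sampling."""
    def __init__(self, n_coarse=64, n_fine=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_coarse, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_fine),
        )

    def forward(self, coarse_t, coarse_weights, near, far):
        # coarse_t, coarse_weights: [R, n_coarse]; near, far: [R, 1]
        x = torch.cat([coarse_t, coarse_weights], dim=-1)
        frac = torch.sigmoid(self.net(x))       # fractions of the near/far interval
        frac, _ = torch.sort(frac, dim=-1)      # keep samples ordered along the ray
        return near + (far - near) * frac       # [R, n_fine] fine sample depths
```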
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.