Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields
- URL: http://arxiv.org/abs/2211.11505v1
- Date: Mon, 21 Nov 2022 14:43:16 GMT
- Title: Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields
- Authors: Yue Chen, Xingyu Chen, Xuan Wang, Qi Zhang, Yu Guo, Ying Shan and Fei Wang
- Abstract summary: We propose L2G-NeRF, a Local-to-Global registration method for Neural Radiance Fields.
Pixel-wise local alignment is learned in an unsupervised way via a deep network.
Our method outperforms the current state-of-the-art in terms of high-fidelity reconstruction and resolving large camera pose misalignment.
- Score: 36.09829614806658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have achieved photorealistic novel
view synthesis; however, the requirement of accurate camera poses limits their
application. Although analysis-by-synthesis extensions exist for jointly learning
neural 3D representations and registering camera frames, they are susceptible
to suboptimal solutions if poorly initialized. We propose L2G-NeRF,
a Local-to-Global registration method for bundle-adjusting Neural Radiance
Fields: first, a pixel-wise flexible alignment, followed by a frame-wise
constrained parametric alignment. Pixel-wise local alignment is learned in an
unsupervised way via a deep network which optimizes photometric reconstruction
errors. Frame-wise global alignment is performed using differentiable parameter
estimation solvers on the pixel-wise correspondences to find a global
transformation. Experiments on synthetic and real-world data show that our
method outperforms the current state-of-the-art in terms of high-fidelity
reconstruction and resolving large camera pose misalignment. Our module is an
easy-to-use plugin that can be applied to NeRF variants and other neural field
applications. Code and supplementary materials are available at
https://rover-xingyu.github.io/L2G-NeRF/.
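To make the two-stage registration concrete, the snippet below is a minimal sketch of the idea under stated assumptions; it is not the released code. `WarpNet` stands in for the deep network that predicts pixel-wise local alignments, and `procrustes_se3` for a differentiable parameter-estimation solver that fits a frame-wise rigid transform to those pixel-wise correspondences.

```python
# Minimal sketch of the local-to-global idea, not the authors' released code.
# `WarpNet` and `procrustes_se3` are illustrative names: the former stands in for
# the deep network that predicts pixel-wise local alignments, the latter for a
# differentiable parameter-estimation solver fitting a frame-wise rigid transform.
import torch
import torch.nn as nn


class WarpNet(nn.Module):
    """Toy per-pixel warp: predicts a 3D offset for each back-projected pixel."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts):            # pts: (N, 3) points of one frame
        return pts + self.mlp(pts)     # locally warped points, (N, 3)


def procrustes_se3(src, dst, weights=None):
    """Differentiable weighted Procrustes: rigid (R, t) minimizing ||R @ src + t - dst||."""
    if weights is None:
        weights = torch.ones(src.shape[0], device=src.device)
    w = (weights / weights.sum())[:, None]
    mu_s, mu_d = (w * src).sum(0), (w * dst).sum(0)
    cov = (w * (src - mu_s)).T @ (dst - mu_d)          # (3, 3) weighted covariance
    U, _, Vh = torch.linalg.svd(cov)
    d = torch.sign(torch.det(Vh.T @ U.T))              # reflection guard
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vh.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t


# Stage 1: flexible pixel-wise alignment (trained via photometric reconstruction error).
# Stage 2: frame-wise rigid transform fitted to the pixel-wise correspondences; an
# alignment term pulls the local warp toward the globally consistent solution.
pts = torch.rand(1024, 3)
warp = WarpNet()
warped = warp(pts)
R, t = procrustes_se3(pts, warped)
global_warped = pts @ R.T + t
alignment_loss = (warped - global_warped).pow(2).mean()
```

Because the SVD-based solver is differentiable, the global-alignment term can backpropagate into the local warp network, which is what allows the flexible pixel-wise registration and the constrained frame-wise registration to be trained together.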
Related papers
- RS-NeRF: Neural Radiance Fields from Rolling Shutter Images [30.719764073204423]
We present RS-NeRF, a method designed to synthesize normal images from novel views using inputs with rolling shutter (RS) distortions.
This involves a physical model that replicates the image formation process under RS conditions.
We further address the inherent shortcomings of the basic RS-NeRF model by delving into the RS characteristics and developing algorithms to enhance its functionality.
arXiv Detail & Related papers (2024-07-14T16:27:11Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance Fields [0.0]
We aim to improve the Inverted Neural Radiance Fields (iNeRF) algorithm, which defines the image pose estimation problem as a NeRF-based iterative linear optimization.
NeRFs are novel neural space representation models that can synthesize photorealistic novel views of real-world scenes or objects.
arXiv Detail & Related papers (2023-10-05T14:27:06Z)
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions [93.02515761070201]
We present a novel type of neural fields that uses general radial bases for signal representation.
Our method builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals.
When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
arXiv Detail & Related papers (2023-09-27T06:32:05Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Deblur-NeRF: Neural Radiance Fields from Blurry Images [30.709331199256376]
We propose Deblur-NeRF, the first method that can recover a sharp NeRF from blurry input.
We adopt an analysis-by-blur approach that reconstructs blurry views by simulating the blurring process.
We demonstrate that our method can be used on both camera motion blur and defocus blur: the two most common types of blur in real scenes.
arXiv Detail & Related papers (2021-11-29T01:49:15Z)
- LENS: Localization enhanced by NeRF synthesis [3.4386226615580107]
We demonstrate improved camera pose regression thanks to an additional synthetic dataset rendered by the NeRF class of algorithms.
We further improve the localization accuracy of pose regressors by using synthesized, realistic, and geometry-consistent images as data augmentation during training.
arXiv Detail & Related papers (2021-10-13T08:15:08Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the scene geometry and accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, fourth-order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
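Since BARF is the baseline that L2G-NeRF builds upon, a compact sketch of what bundle-adjusting means in this setting may help: per-frame pose corrections are learnable parameters that receive gradients from the same photometric loss that trains the radiance field. The snippet below is a toy illustration under assumed names (`axis_angle_to_R`, a point-wise `field` in place of volume rendering), not the BARF implementation, and it omits BARF's coarse-to-fine positional-encoding schedule.

```python
# Toy sketch of BARF-style joint pose-and-scene optimization (assumed, simplified):
# per-frame pose corrections are nn.Parameters, and a photometric loss over the
# (here, point-wise) field sends gradients to both the poses and the field weights.
import torch
import torch.nn as nn


def axis_angle_to_R(w):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = w.norm().clamp(min=1e-8)
    k = w / theta
    K = torch.zeros(3, 3)
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    return torch.eye(3) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)


field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))  # toy radiance field
num_frames = 4
# Small random perturbations stand in for imperfect initial poses: [rotation | translation].
pose_corrections = nn.Parameter(1e-3 * torch.randn(num_frames, 6))

points = torch.rand(num_frames, 256, 3)      # stand-in for samples along camera rays
target = torch.rand(num_frames, 256, 3)      # stand-in for observed pixel colors

optimizer = torch.optim.Adam([{"params": field.parameters()},
                              {"params": [pose_corrections], "lr": 1e-3}], lr=5e-4)

for step in range(100):
    optimizer.zero_grad()
    loss = 0.0
    for i in range(num_frames):
        R = axis_angle_to_R(pose_corrections[i, :3])
        t = pose_corrections[i, 3:]
        world = points[i] @ R.T + t          # apply the learnable pose correction
        loss = loss + (field(world) - target[i]).pow(2).mean()
    loss.backward()                          # gradients reach both field and poses
    optimizer.step()
```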