A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses
from HoloLens Trajectories and Structure from Motion
- URL: http://arxiv.org/abs/2304.10664v1
- Date: Thu, 20 Apr 2023 22:17:28 GMT
- Title: A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses
from HoloLens Trajectories and Structure from Motion
- Authors: Miriam Jäger, Patrick Hübner, Dennis Haitz, Boris Jutzi
- Abstract summary: We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using Neural Radiance Fields (NeRFs).
NeRFs are trained using a set of camera poses and associated images as input to estimate density and color values for each position.
Results show that the internal camera poses, after a simple rotation around the x-axis, lead to NeRF convergence with a PSNR of 25 dB and enable a 3D reconstruction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) are trained using a set of camera poses and
associated images as input to estimate density and color values for each
position. The position-dependent density learning is of particular interest for
photogrammetry, enabling 3D reconstruction by querying and filtering the NeRF
coordinate system based on the object density. While traditional methods like
Structure from Motion are commonly used for camera pose calculation in
pre-processing for NeRFs, the HoloLens offers an interesting interface for
extracting the required input data directly. We present a workflow for
high-resolution 3D reconstructions almost directly from HoloLens data using
NeRFs. Two sources of camera poses are investigated: internal camera poses
from the HoloLens trajectory, obtained via a server application, and external
camera poses from Structure from Motion, each additionally in an enhanced
variant obtained through pose refinement. Results show that the internal
camera poses, after a simple rotation around the x-axis, lead to NeRF
convergence with a PSNR of 25 dB and enable a 3D reconstruction. Pose
refinement yields quality comparable to that of the external camera poses,
resulting in an improved training process with a PSNR of 27 dB and a better
3D reconstruction. Overall, NeRF reconstructions outperform
the conventional photogrammetric dense reconstruction using Multi-View Stereo
in terms of completeness and level of detail.
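The density-based extraction mentioned in the abstract can be pictured with a short sketch: query the trained field on a regular grid and keep only positions whose density exceeds a threshold. This is a minimal illustration, not the paper's implementation; `density_fn`, the grid bounds, resolution, and threshold are all assumptions.

```python
import numpy as np

def extract_point_cloud(density_fn, bounds=(-1.0, 1.0), res=128, threshold=10.0):
    """Query `density_fn` on a regular grid inside `bounds` and keep positions
    whose density exceeds `threshold` (likely object surface or interior).
    Grid bounds, resolution, and threshold are illustrative assumptions."""
    axis = np.linspace(bounds[0], bounds[1], res)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    densities = density_fn(pts)           # (res**3,) volume densities
    return pts[densities > threshold]     # density-filtered point cloud
```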
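Likewise, the "simple rotation around the x-axis" amounts to one global change of coordinate convention applied to the camera-to-world matrices, and the 25 dB / 27 dB figures are plain PSNR values. A minimal sketch; the 90-degree default is an assumption, as the abstract does not state the angle.

```python
import numpy as np

def rotate_poses_about_x(c2w_poses, angle_rad=np.pi / 2):
    """Apply one global rotation about the x-axis to (N, 4, 4) camera-to-world
    poses. The abstract only states that a rotation about x is needed; the
    90-degree default here is an assumption."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rx = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0,   c,  -s, 0.0],
                   [0.0,   s,   c, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
    return np.einsum("ij,njk->nik", rx, c2w_poses)

def psnr(rendered, reference):
    """PSNR in dB for images in [0, 1]; the metric behind the reported 25-27 dB."""
    mse = np.mean((rendered - reference) ** 2)
    return -10.0 * np.log10(mse)
```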
Related papers
- Evaluating geometric accuracy of NeRF reconstructions compared to SLAM method
Photogrammetry can perform image-based 3D reconstruction but is computationally expensive and requires extremely dense image representation to recover complex geometry and photorealism.
NeRFs perform 3D scene reconstruction by training a neural network on sparse image and pose data, achieving superior results to photogrammetry with less input data.
This paper presents an evaluation of two NeRF scene reconstructions for the purpose of estimating the diameter of a vertical PVC cylinder.
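One way to picture the diameter estimation: project the reconstructed points of a vertical cylinder onto the horizontal plane and fit a circle by linear least squares. A minimal sketch assuming the cylinder axis is aligned with z; the paper's actual estimator may differ.

```python
import numpy as np

def cylinder_diameter(points_xyz):
    """Estimate the diameter of a vertical cylinder from reconstructed points:
    drop the vertical coordinate (axis assumed along z) and fit a circle to
    the horizontal footprint with a linear least-squares (Kasa) fit."""
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)   # from c = r^2 - cx^2 - cy^2
    return 2.0 * radius
```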
arXiv Detail & Related papers (2024-07-15T21:04:11Z)
- CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images
We propose continuous rigid motion-aware Gaussian splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images with real-time rendering speed.
We leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object.
Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems.
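The rigid-body motion model can be pictured as sampling intermediate camera poses along a continuous SE(3) trajectory during the exposure. A minimal sketch using rotation slerp and linear translation interpolation; CRiM-GS learns this motion, so the fixed endpoint poses here are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def sample_exposure_poses(r_start, r_end, t_start, t_end, n=8):
    """Return n (R, t) pairs along the rigid-body (SE(3)) motion from the start
    pose to the end pose: slerp for rotation, linear for translation."""
    times = np.linspace(0.0, 1.0, n)
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([r_start, r_end]))
    rotations = slerp(times)
    translations = (1 - times)[:, None] * t_start + times[:, None] * t_end
    return [(rotations[i].as_matrix(), translations[i]) for i in range(n)]
```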
arXiv Detail & Related papers (2024-07-04T13:37:04Z)
- CT-NeRF: Incremental Optimizing Neural Radiance Field and Poses with Complex Trajectory
We propose CT-NeRF, an incremental reconstruction optimization pipeline using only RGB images, without pose or depth input.
We evaluate the performance of CT-NeRF on two real-world datasets, NeRFBuster and Free-Dataset.
arXiv Detail & Related papers (2024-04-22T06:07:06Z)
- NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
We present NeSLAM, a framework that achieves accurate and dense depth estimation, robust camera tracking, and realistic synthesis of novel views.
Experiments on various indoor datasets demonstrate the effectiveness and accuracy of the system in reconstruction, tracking quality, and novel view synthesis.
arXiv Detail & Related papers (2024-03-29T07:59:37Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields
We propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content for initial camera pose estimation and uses statics-focused ray sampling for fast and accurate novel-view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
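Statics-focused ray sampling can be sketched as drawing training rays only from pixels a motion mask marks as static. A minimal sketch; the mask source and batch size are assumptions.

```python
import numpy as np

def sample_static_rays(dynamic_mask, n_rays=1024, rng=None):
    """dynamic_mask: (H, W) bool array, True where content moves.
    Returns (n_rays, 2) row/col pixel coordinates drawn from static pixels
    only, so dynamic objects do not corrupt the radiance field."""
    if rng is None:
        rng = np.random.default_rng()
    static_vu = np.argwhere(~dynamic_mask)           # coordinates of static pixels
    idx = rng.choice(len(static_vu), size=n_rays)    # sample with replacement
    return static_vu[idx]
```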
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- 3D Reconstruction of Spherical Images based on Incremental Structure from Motion
This study investigates the algorithms for the relative orientation using spherical correspondences, absolute orientation using 3D correspondences between scene and spherical points, and the cost functions for BA (bundle adjustment) optimization.
An incremental SfM (Structure from Motion) workflow has been proposed for spherical images using the above-mentioned algorithms.
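The prerequisite for spherical correspondences is mapping pixels to unit bearing vectors on the sphere; relative orientation (an epipolar-style constraint on those vectors) and the BA cost functions are then formulated on them. A minimal sketch assuming an equirectangular layout; axis conventions vary.

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing vector on the
    sphere. Spherical correspondences are pairs of such vectors, on which the
    epipolar constraint x2^T E x1 = 0 can be enforced."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])
```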
arXiv Detail & Related papers (2023-06-22T09:49:28Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
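"Marrying the multilayer perceptron with convolutional layers" can be sketched as a per-point MLP color head followed by a small convolutional refinement over the assembled patch. The layer sizes below are assumptions, not the AligNeRF architecture.

```python
import torch
import torch.nn as nn

class ConvRefinedHead(nn.Module):
    """Per-pixel MLP color prediction plus a residual convolutional refinement
    that adds neighborhood context to the rendered patch."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 3))
        self.refine = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, point_feats):
        """point_feats: (B, H, W, feat_dim) per-pixel features from the NeRF."""
        rgb = self.mlp(point_feats)           # (B, H, W, 3) per-pixel colors
        rgb = rgb.permute(0, 3, 1, 2)         # to (B, 3, H, W) for convolution
        return rgb + self.refine(rgb)         # residual conv refinement
```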
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- VMRF: View Matching Neural Radiance Fields
VMRF is an innovative view matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions.
VMRF introduces a view matching scheme, which exploits unbalanced optimal transport to produce a feature transport plan for mapping a rendered image with a randomly initialized camera pose to the corresponding real image.
With the feature transport plan as the guidance, a novel pose calibration technique is designed which rectifies the initially randomized camera poses by predicting relative pose between the pair of rendered and real images.
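The feature transport plan can be pictured with a standard entropic Sinkhorn iteration between the two feature sets; note that VMRF uses unbalanced optimal transport, which further relaxes the marginal constraints. The cost choice, epsilon, and iteration count are assumptions.

```python
import numpy as np

def transport_plan(feats_rendered, feats_real, eps=0.05, iters=100):
    """Entropic (balanced) Sinkhorn between an (N, D) rendered-feature set and
    an (M, D) real-feature set; returns an (N, M) transport plan."""
    cost = ((feats_rendered[:, None] - feats_real[None]) ** 2).sum(-1)
    K = np.exp(-cost / eps)                          # Gibbs kernel
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])  # uniform source marginal
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])  # uniform target marginal
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                           # Sinkhorn fixed point
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None]
```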
arXiv Detail & Related papers (2022-07-06T12:26:40Z)
- BARF: Bundle-Adjusting Neural Radiance Fields
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
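The bundle-adjusting idea can be sketched as learnable per-camera pose corrections optimized jointly with the NeRF weights under a photometric loss; BARF's coarse-to-fine positional-encoding schedule is omitted, and `nerf`, `render`, and the loader in the commented usage are assumptions.

```python
import torch

def rodrigues(w):
    """Axis-angle vectors (N, 3) -> rotation matrices (N, 3, 3) via the
    Rodrigues formula, so pose corrections stay differentiable."""
    theta = w.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    k = w / theta
    K = torch.zeros(w.shape[0], 3, 3, device=w.device)
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    eye = torch.eye(3, device=w.device).expand_as(K)
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    return eye + s * K + (1.0 - c) * (K @ K)

# Joint optimization (commented usage; `nerf`, `render`, `loader` are assumed):
# pose_delta = torch.zeros(num_cams, 6, requires_grad=True)  # per-camera correction
# opt = torch.optim.Adam([*nerf.parameters(), pose_delta], lr=1e-3)
# for rays, rgb_gt, cam in loader:
#     R, t = rodrigues(pose_delta[cam, :3]), pose_delta[cam, 3:]
#     loss = ((render(nerf, rays, R, t) - rgb_gt) ** 2).mean()
#     opt.zero_grad(); loss.backward(); opt.step()
```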
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
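Inverting a NeRF for pose can be sketched as gradient descent on a 6-DoF pose with the network weights frozen, the mirror image of the joint optimization sketched under BARF above. A minimal sketch; the differentiable `render_fn` is an assumption.

```python
import torch

def invert_nerf_pose(render_fn, observed, steps=200, lr=1e-2):
    """Recover a 6-DoF pose by gradient descent on the photometric error,
    keeping the NeRF weights frozen. `render_fn(pose) -> image` must be
    differentiable w.r.t. `pose`; its existence is an assumption here."""
    pose = torch.zeros(6, requires_grad=True)      # axis-angle + translation
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        loss = ((render_fn(pose) - observed) ** 2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    return pose.detach()
```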
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.