CROSSFIRE: Camera Relocalization On Self-Supervised Features from an
Implicit Representation
- URL: http://arxiv.org/abs/2303.04869v2
- Date: Tue, 22 Aug 2023 09:21:46 GMT
- Title: CROSSFIRE: Camera Relocalization On Self-Supervised Features from an
Implicit Representation
- Authors: Arthur Moreau, Nathan Piasco, Moussab Bennehar, Dzmitry Tsishkou,
Bogdan Stanciulescu, Arnaud de La Fortelle
- Abstract summary: We use Neural Radiance Fields as an implicit map of a given scene and propose a camera relocalization algorithm tailored to this representation.
The proposed method computes, in real time, the precise position of a device equipped with a single RGB camera during its navigation.
- Score: 3.565151496245487
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Beyond novel view synthesis, Neural Radiance Fields are useful for
applications that interact with the real world. In this paper, we use them as
an implicit map of a given scene and propose a camera relocalization algorithm
tailored for this representation. The proposed method makes it possible to
compute, in real time, the precise position of a device equipped with a single
RGB camera during its navigation. In contrast with previous work, we do not
rely on pose regression or photometric alignment; instead, we use dense local
features, obtained through volumetric rendering, that are specialized to the
scene with a self-supervised objective. As a result, our algorithm is more
accurate than competing methods, operates in dynamic outdoor environments with
changing lighting conditions, and can be readily integrated into any
volumetric neural renderer.
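As a rough illustration of the matching-based relocalization the abstract describes, the Python sketch below fakes per-point descriptors "rendered" from an implicit map, matches them against query-image descriptors by mutual nearest neighbour, and recovers the camera pose with PnP + RANSAC via OpenCV. Everything here is a hypothetical stand-in, not the authors' implementation: `render_scene_features` replaces the actual volumetric feature rendering, and the query descriptors are copies of the map descriptors so the toy problem is exactly solvable.

```python
# Minimal sketch: feature-based relocalization against an implicit map.
# NOT the paper's code; the scene features are random placeholders.
import numpy as np
import cv2

def render_scene_features(n_points, dim=16, seed=0):
    """Stand-in for volumetric rendering of dense local features:
    returns 3D points of the implicit map and one descriptor per point."""
    rng = np.random.default_rng(seed)
    pts3d = rng.uniform(-1.0, 1.0, size=(n_points, 3)).astype(np.float64)
    desc = rng.normal(size=(n_points, dim)).astype(np.float32)
    desc /= np.linalg.norm(desc, axis=1, keepdims=True)
    return pts3d, desc

def match_descriptors(query_desc, map_desc):
    """Mutual nearest-neighbour matching on cosine similarity."""
    sim = query_desc @ map_desc.T
    fwd = sim.argmax(axis=1)                # query -> map
    bwd = sim.argmax(axis=0)                # map -> query
    keep = bwd[fwd] == np.arange(len(fwd))  # mutual consistency check
    return np.nonzero(keep)[0], fwd[keep]

# Synthetic setup: a known ground-truth pose to recover.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d, map_desc = render_scene_features(200)
rvec_gt = np.array([[0.1], [-0.2], [0.05]])
tvec_gt = np.array([[0.3], [0.1], [4.0]])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
pts2d = pts2d.reshape(-1, 2)

# In the real pipeline the query descriptors would come from a CNN on the
# RGB frame; here they equal the map descriptors so matches are exact.
query_desc = map_desc.copy()
qi, mi = match_descriptors(query_desc, map_desc)

# 2D-3D correspondences -> pose via PnP + RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d[mi], pts2d[qi], K, None)
print("recovered rotation:", rvec.ravel(), "translation:", tvec.ravel())
```

The point of departure from photometric alignment is that the pose comes from explicit 2D-3D correspondences, so a standard real-time PnP solver can be used instead of iterating gradients through the renderer.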
Related papers
- GS-EVT: Cross-Modal Event Camera Tracking based on Gaussian Splatting [19.0745952177123]
This paper explores the use of event cameras for motion tracking.
It provides a solution with inherent robustness under difficult dynamics and illumination.
It tracks against a map representation that comes directly from frame-based cameras.
arXiv Detail & Related papers (2024-09-28T03:56:39Z)
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- Gaussian-SLAM: Photo-realistic Dense SLAM with Gaussian Splatting [24.160436463991495]
We present a dense simultaneous localization and mapping (SLAM) method that uses 3D Gaussians as a scene representation.
Our approach enables interactive-time reconstruction and photo-realistic rendering from real-world single-camera RGBD videos (see the compositing sketch below).
arXiv Detail & Related papers (2023-12-06T10:47:53Z)
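Since the entry above represents the scene with 3D Gaussians, here is a minimal sketch of the front-to-back alpha compositing such renderers perform per pixel. It assumes each Gaussian's opacity at the pixel has already been evaluated from its projected footprint; the values are toy numbers, not Gaussian-SLAM's implementation.

```python
# Front-to-back alpha compositing of depth-sorted Gaussians at one pixel.
import numpy as np

def composite(colors, alphas):
    """colors: (N,3) RGB of depth-sorted Gaussians (near to far);
    alphas: (N,) opacity of each Gaussian at the current pixel."""
    pixel = np.zeros(3)
    transmittance = 1.0                 # fraction of light still unblocked
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c  # this Gaussian's contribution
        transmittance *= (1.0 - a)      # attenuate light for those behind
        if transmittance < 1e-4:        # early termination, as in 3DGS
            break
    return pixel

colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
alphas = np.array([0.4, 0.5, 0.9])
print(composite(colors, alphas))  # red-dominated: the nearest Gaussian wins
```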
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method (see the event-model sketch below).
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
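RGB-plus-event methods like the one above build on the standard event-generation model: a pixel emits an event of polarity ±1 whenever its log intensity changes by a contrast threshold C. The sketch below shows how accumulated event polarities can supervise the log-brightness difference between two rendered frames; the threshold and all data are assumed toy values, not the paper's actual pipeline.

```python
# Event-generation model: accumulated polarities approximate log-brightness
# change, which can be compared against two rendered frames.
import numpy as np

C = 0.2  # contrast threshold (an assumed value)

def accumulate_events(events, shape):
    """events: list of (x, y, polarity); returns per-pixel polarity sum."""
    acc = np.zeros(shape)
    for x, y, p in events:
        acc[y, x] += p
    return acc

def event_loss(render_t0, render_t1, events):
    """Compare rendered log-brightness change with the event measurement."""
    pred = np.log(render_t1 + 1e-6) - np.log(render_t0 + 1e-6)
    meas = C * accumulate_events(events, render_t0.shape)
    return np.mean((pred - meas) ** 2)

# Toy example: one pixel brightens enough to trigger two positive events.
I0 = np.full((4, 4), 0.5)
I1 = I0.copy()
I1[1, 2] = 0.5 * np.exp(2 * C)
print(event_loss(I0, I1, [(2, 1, +1), (2, 1, +1)]))  # ~0: model consistent
```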
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses (see the rendering sketch below).
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
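For reference, the differentiable volume rendering that the NeRF entry relies on reduces, per ray, to the quadrature C = sum_i T_i (1 - exp(-sigma_i * delta_i)) c_i with transmittance T_i = prod_{j<i} (1 - alpha_j). A minimal NumPy sketch with toy samples, not the original implementation:

```python
# Per-ray volume-rendering quadrature as used by NeRF-style renderers.
import numpy as np

def volume_render(sigmas, colors, ts):
    """sigmas: (N,) densities at samples along a ray; colors: (N,3) RGB;
    ts: (N,) increasing sample depths. Returns the composited pixel color."""
    deltas = np.diff(ts, append=ts[-1] + 1e10)  # last interval ~infinite
    alphas = 1.0 - np.exp(-sigmas * deltas)     # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i
    weights = trans * alphas                    # compositing weights
    return (weights[:, None] * colors).sum(axis=0)

ts = np.linspace(2.0, 6.0, 64)
sigmas = np.where(np.abs(ts - 4.0) < 0.2, 10.0, 0.0)  # a thin opaque slab
colors = np.tile([0.2, 0.6, 0.9], (64, 1))
print(volume_render(sigmas, colors, ts))  # approaches the slab's color
```

Because every step above is differentiable, gradients flow from a pixel loss back to the per-sample densities and colors, which is why posed images alone suffice for optimization.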
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.