Beyond Controlled Environments: 3D Camera Re-Localization in Changing
Indoor Scenes
- URL: http://arxiv.org/abs/2008.02004v1
- Date: Wed, 5 Aug 2020 09:02:12 GMT
- Title: Beyond Controlled Environments: 3D Camera Re-Localization in Changing
Indoor Scenes
- Authors: Johanna Wald, Torsten Sattler, Stuart Golodetz, Tommaso Cavallari,
Federico Tombari
- Abstract summary: Long-term camera re-localization is an important task with numerous computer vision and robotics applications.
We create RIO10, a new long-term camera re-localization benchmark focused on indoor scenes.
- Score: 74.9814252247282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-term camera re-localization is an important task with numerous computer
vision and robotics applications. Whilst various outdoor benchmarks exist that
target lighting, weather and seasonal changes, far less attention has been paid
to appearance changes that occur indoors. This has led to a mismatch between
popular indoor benchmarks, which focus on static scenes, and indoor
environments that are of interest for many real-world applications. In this
paper, we adapt 3RScan - a recently introduced indoor RGB-D dataset designed
for object instance re-localization - to create RIO10, a new long-term camera
re-localization benchmark focused on indoor scenes. We propose new metrics for
evaluating camera re-localization and explore how state-of-the-art camera
re-localizers perform according to these metrics. We also examine in detail how
different types of scene change affect the performance of different methods,
based on novel ways of detecting such changes in a given RGB-D frame. Our
results clearly show that long-term indoor re-localization is an unsolved
problem. Our benchmark and tools are publicly available at
waldjohannau.github.io/RIO10
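For context, camera re-localization is typically scored by comparing an estimated pose against ground truth. The sketch below is a minimal illustration of the standard translation/rotation error computation for 4x4 camera-to-world pose matrices, not the new metrics proposed in the paper; the `pose_errors` helper and the 5 cm / 5 deg thresholds are common conventions assumed here.

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation (m) and rotation (deg) error between two
    4x4 camera-to-world pose matrices."""
    # Translation error: Euclidean distance between camera centers.
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    # Rotation error: geodesic angle of the relative rotation.
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

# A query is often counted as "localized" if both errors fall
# below thresholds such as 5 cm and 5 degrees.
t_err, r_err = pose_errors(np.eye(4), np.eye(4))
print(t_err <= 0.05 and r_err <= 5.0)  # True
```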
Related papers
- Monocular Occupancy Prediction for Scalable Indoor Scenes [56.686307396496545]
We propose a novel method, named ISO, for predicting indoor scene occupancy using monocular images.
ISO harnesses the advantages of a pretrained depth model to achieve accurate depth predictions.
We also introduce Occ-ScanNet, a large-scale occupancy benchmark for indoor scenes.
arXiv Detail & Related papers (2024-07-16T13:50:40Z)
- Scene Coordinate Reconstruction: Posing of Image Collections via Incremental Learning of a Relocalizer [21.832249148699397]
We address the task of estimating camera parameters from a set of images depicting a scene.
We show that scene coordinate regression, a learning-based relocalization approach, allows us to build implicit, neural scene representations from unposed images.
arXiv Detail & Related papers (2024-04-22T17:02:33Z)
- Lazy Visual Localization via Motion Averaging [89.8709956317671]
We show that it is possible to achieve high localization accuracy without reconstructing the scene from the database.
Experiments show that our visual localization proposal, LazyLoc, achieves performance comparable to state-of-the-art structure-based methods.
arXiv Detail & Related papers (2023-07-19T13:40:45Z)
- Map-free Visual Relocalization: Metric Pose Relative to a Single Image [21.28513803531557]
We propose Map-free Relocalization, which uses only one photo of a scene to enable instant, metric-scaled relocalization.
Existing datasets are not suitable to benchmark map-free relocalization, due to their focus on large scenes or their limited variability.
We have constructed a new dataset of 655 small places of interest, such as sculptures, murals and fountains, collected worldwide.
arXiv Detail & Related papers (2022-10-11T14:49:49Z)
- Visual Localization via Few-Shot Scene Region Classification [84.34083435501094]
Visual (re)localization addresses the problem of estimating the 6-DoF camera pose of a query image captured in a known scene.
Recent advances in structure-based localization solve this problem by memorizing the mapping from image pixels to scene coordinates.
We propose a scene region classification approach to achieve fast and effective scene memorization with few-shot images.
arXiv Detail & Related papers (2022-08-14T22:39:02Z)
- Robust Change Detection Based on Neural Descriptor Fields [53.111397800478294]
We develop an object-level online change detection approach that is robust to partially overlapping observations and noisy localization results.
By associating objects via shape-code similarity and comparing local object-neighbor spatial layouts, our proposed approach demonstrates robustness to low observation overlap and localization noise.
arXiv Detail & Related papers (2022-08-01T17:45:36Z)
- VS-Net: Voting with Segmentation for Visual Localization [72.8165619061249]
We propose a novel visual localization framework that establishes 2D-to-3D correspondences between the query image and the 3D map with a series of learnable scene-specific landmarks.
Our proposed VS-Net is extensively tested on multiple public benchmarks and can outperform state-of-the-art visual localization methods.
arXiv Detail & Related papers (2021-05-23T08:44:11Z)
- Large-scale Localization Datasets in Crowded Indoor Spaces [23.071409425965772]
We introduce 5 new indoor datasets for visual localization in challenging real-world environments.
They were captured in a large shopping mall and a large metro station in Seoul, South Korea.
To obtain accurate ground-truth camera poses, we developed a robust LiDAR SLAM pipeline.
arXiv Detail & Related papers (2021-05-19T06:20:49Z)
- Robust Neural Routing Through Space Partitions for Camera Relocalization in Dynamic Indoor Environments [39.99342226556908]
Localizing the camera in a known indoor environment is a key building block for scene mapping, robot navigation, AR, etc.
Recent advances estimate the camera pose by optimizing over 2D-3D (or 3D-3D) correspondences established between camera-space coordinates and 3D world-space coordinates.
We propose a novel outlier-aware neural tree that bridges two worlds: deep learning and decision-tree approaches. A sketch of the correspondence-based pose step shared by several of these methods follows the list.
arXiv Detail & Related papers (2020-12-08T21:20:54Z)
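Several entries above (scene coordinate regression, VS-Net's scene-specific landmarks, the outlier-aware neural tree) ultimately recover the camera pose from 2D-3D correspondences. Below is a minimal sketch of that final step using OpenCV's standard PnP-RANSAC solver; the correspondences and intrinsics are random placeholders standing in for a learned predictor's output, not code from any of the papers above.

```python
import cv2
import numpy as np

# Hypothetical inputs: for each sampled pixel, a predicted 3D scene
# coordinate (e.g. from scene coordinate regression) and its 2D location.
rng = np.random.default_rng(0)
points_3d = rng.uniform(-2.0, 2.0, (100, 3))
points_2d = rng.uniform(0.0, 640.0, (100, 2))

# Pinhole intrinsics (focal length and principal point are placeholders).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

# RANSAC-based PnP: robust to outlier correspondences, which is why it is
# the standard back end for learned 2D-3D matching pipelines.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, distCoeffs=None,
    reprojectionError=8.0, iterationsCount=1000)

if ok:
    R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation matrix
    print("inliers:", 0 if inliers is None else len(inliers))
```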
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.