CPO: Change Robust Panorama to Point Cloud Localization
- URL: http://arxiv.org/abs/2207.05317v2
- Date: Fri, 2 Feb 2024 04:46:34 GMT
- Title: CPO: Change Robust Panorama to Point Cloud Localization
- Authors: Junho Kim, Hojun Jang, Changwoon Choi, and Young Min Kim
- Abstract summary: We present CPO, a robust algorithm that localizes a 2D panorama with respect to a 3D point cloud of a scene possibly containing changes.
CPO is lightweight and achieves effective localization in all tested scenarios.
- Score: 20.567452635590946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present CPO, a fast and robust algorithm that localizes a 2D panorama with
respect to a 3D point cloud of a scene possibly containing changes. To robustly
handle scene changes, our approach deviates from conventional feature point
matching and focuses on the spatial context provided by panorama images.
Specifically, we propose efficient color histogram generation and subsequent
robust localization using score maps. By utilizing the unique equivariance of
spherical projections, we propose very fast color histogram generation for a
large number of camera poses without explicitly rendering images for all
candidate poses. We accumulate the regional consistency of the panorama and
point cloud as 2D/3D score maps, and use them to weigh the input color values
to further increase robustness. The weighted color distribution quickly finds
good initial poses and achieves stable convergence for gradient-based
optimization. CPO is lightweight and achieves effective localization in all
tested scenarios, showing stable performance despite scene changes, repetitive
structures, or featureless regions, which are typical challenges for visual
localization with perspective cameras. Code is available at
\url{https://github.com/82magnolia/panoramic-localization/}.
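The abstract describes scoring candidate poses by comparing regional color histograms between the panorama and the point cloud rather than matching feature points. The following is a minimal numpy sketch of that general idea, not CPO's actual implementation: it assumes equirectangular RGB images with values in [0, 1), and the function names, grid size, and histogram-intersection score are illustrative choices.

```python
import numpy as np

def region_histograms(img, bins=8, grid=(4, 8)):
    """Split an equirectangular image (H, W, 3) into grid cells and
    compute a normalized per-cell color histogram."""
    H, W, _ = img.shape
    gh, gw = grid
    hists = np.zeros((gh, gw, bins ** 3))
    # Quantize each pixel's RGB triple into a single joint bin index.
    q = np.clip((img * bins).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    for i in range(gh):
        for j in range(gw):
            cell = idx[i * H // gh:(i + 1) * H // gh,
                       j * W // gw:(j + 1) * W // gw]
            h = np.bincount(cell.ravel(), minlength=bins ** 3).astype(float)
            hists[i, j] = h / max(h.sum(), 1.0)
    return hists

def score_map(pano_hists, cloud_hists):
    """Histogram intersection per region: high where the panorama and the
    point-cloud projection agree, low in changed or inconsistent regions."""
    return np.minimum(pano_hists, cloud_hists).sum(axis=-1)
```

A 2D map like this could down-weight changed regions before pose scoring; the paper's own score maps and histogram generation over many poses are more involved.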
Related papers
- SplatLoc: 3D Gaussian Splatting-based Visual Localization for Augmented Reality [50.179377002092416]
We propose an efficient visual localization method capable of high-quality rendering with fewer parameters.
Our method achieves superior or comparable rendering and localization performance to state-of-the-art implicit-based visual localization approaches.
arXiv Detail & Related papers (2024-09-21T08:46:16Z)
- FaVoR: Features via Voxel Rendering for Camera Relocalization [23.7893950095252]
Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image.
We propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features.
By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking.
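The FaVoR summary mentions triangulating landmarks over a sequence of frames. A standard building block for this (not FaVoR's code; the function name and two-view setup are illustrative) is linear DLT triangulation from two projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D observations in
    normalized image coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With more than two frames, the same construction simply stacks two rows per view before the SVD.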
arXiv Detail & Related papers (2024-09-11T18:58:16Z)
- Learning to Produce Semi-dense Correspondences for Visual Localization [11.415451542216559]
This study addresses the challenge of performing visual localization in demanding conditions such as night-time scenarios, adverse weather, and seasonal changes.
We propose a novel method that extracts reliable semi-dense 2D-3D matching points based on dense keypoint matches.
The network utilizes both geometric and visual cues to effectively infer 3D coordinates for unobserved keypoints from the observed ones.
arXiv Detail & Related papers (2024-02-13T10:40:10Z)
- FocusTune: Tuning Visual Localization through Focus-Guided Sampling [61.79440120153917]
FocusTune is a focus-guided sampling technique to improve the performance of visual localization algorithms.
We demonstrate that FocusTune either improves on or matches state-of-the-art performance while keeping ACE's appealing low storage and compute requirements.
This combination of high performance and low compute and storage requirements is particularly promising for applications in areas like mobile robotics and augmented reality.
arXiv Detail & Related papers (2023-11-06T04:58:47Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and low memory usage while achieving competitive, and even superior, accuracy.
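The quadric-representation summary rests on the idea that a handful of surface coefficients can stand in for many raw points. A minimal least-squares sketch of that compression (a generic quadric height-field fit, not the paper's actual formulation; the function name is illustrative):

```python
import numpy as np

def fit_quadric_patch(pts):
    """Least-squares fit of a quadric height field
    z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    to an (N, 3) point patch. Six coefficients replace N raw points."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```

Storing six numbers per surface patch instead of hundreds of points is what makes such representations attractive for low-latency odometry and mapping.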
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Optimizing Fiducial Marker Placement for Improved Visual Localization [24.614588477086503]
This paper explores the problem of automatic marker placement within a scene.
We compute optimized marker positions within the scene that can improve accuracy in visual localization.
We present optimized marker placement (OMP), a greedy algorithm that is based on the camera localizability framework.
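OMP is described as a greedy algorithm built on a camera localizability framework. A generic set-cover-style sketch of greedy placement (the coverage objective and function name here are illustrative stand-ins, not the paper's actual localizability score):

```python
def greedy_marker_placement(candidates, covers, k):
    """Greedy placement: candidates is a list of marker positions,
    covers[i] is the set of viewpoints that marker i helps localize.
    Pick up to k markers, each maximizing newly covered viewpoints."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(candidates)),
                   key=lambda i: len(covers[i] - covered))
        if not covers[best] - covered:
            break  # no candidate adds new coverage
        chosen.append(candidates[best])
        covered |= covers[best]
    return chosen, covered
```

Greedy selection of this form carries the usual (1 - 1/e) approximation guarantee when the coverage objective is submodular, which is one reason it is a common choice for placement problems.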
arXiv Detail & Related papers (2022-11-02T23:18:14Z)
- MeshLoc: Mesh-Based Visual Localization [54.731309449883284]
We explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation.
Surprisingly competitive results can be obtained when extracting features on renderings of these meshes, without any neural rendering stage.
Our results show that dense 3D model-based representations are a promising alternative to existing representations and point to interesting and challenging directions for future research.
arXiv Detail & Related papers (2022-07-21T21:21:10Z)
- Pixel-Perfect Structure-from-Motion with Featuremetric Refinement [96.73365545609191]
We refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views.
This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors.
Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale.
arXiv Detail & Related papers (2021-08-18T17:58:55Z)
- PICCOLO: Point Cloud-Centric Omnidirectional Localization [20.567452635590943]
We present PICCOLO, a simple and efficient algorithm for omnidirectional localization.
Our pipeline works in an off-the-shelf manner with a single image given as a query.
PICCOLO outperforms existing omnidirectional localization algorithms in both accuracy and stability when evaluated in various environments.
arXiv Detail & Related papers (2021-08-14T14:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.