PICCOLO: Point Cloud-Centric Omnidirectional Localization
- URL: http://arxiv.org/abs/2108.06545v3
- Date: Fri, 2 Feb 2024 05:13:52 GMT
- Title: PICCOLO: Point Cloud-Centric Omnidirectional Localization
- Authors: Junho Kim, Changwoon Choi, Hojun Jang, and Young Min Kim
- Abstract summary: We present PICCOLO, a simple and efficient algorithm for omnidirectional localization.
Our pipeline works in an off-the-shelf manner with a single image given as a query.
PICCOLO outperforms existing omnidirectional localization algorithms in both accuracy and stability when evaluated in various environments.
- Score: 20.567452635590943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present PICCOLO, a simple and efficient algorithm for omnidirectional
localization. Given a colored point cloud and a 360 panorama image of a scene,
our objective is to recover the camera pose at which the panorama image is
taken. Our pipeline works in an off-the-shelf manner with a single image given
as a query and does not require any training of neural networks or collecting
ground-truth poses of images. Instead, we match each point cloud color to the
holistic view of the panorama image with gradient-descent optimization to find
the camera pose. Our loss function, called sampling loss, is point
cloud-centric, evaluated at the projected location of every point in the point
cloud. In contrast, conventional photometric loss is image-centric, comparing
colors at each pixel location. With a simple change in the compared entities,
sampling loss effectively overcomes the severe visual distortion of
omnidirectional images, and enjoys the global context of the 360 view to handle
challenging scenarios for visual localization. PICCOLO outperforms existing
omnidirectional localization algorithms in both accuracy and stability when
evaluated in various environments. Code is available at
\url{https://github.com/82magnolia/panoramic-localization/}.
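For concreteness, below is a minimal PyTorch sketch of the sampling-loss idea described in the abstract (it is not the authors' released implementation): every point is projected into the equirectangular panorama, the panorama is sampled at the projected locations, and the sampled colors are compared against the point colors while gradient descent refines the pose. The pose parameterization (axis-angle rotation omega plus translation t), the function names, and the toy tensors are illustrative assumptions.
```python
import math
import torch
import torch.nn.functional as F

def axis_angle_to_matrix(omega):
    # Rodrigues' formula: differentiable map from a 3-vector to a rotation matrix.
    theta = omega.norm() + 1e-8
    k = omega / theta
    zero = torch.zeros((), dtype=omega.dtype, device=omega.device)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    I = torch.eye(3, dtype=omega.dtype, device=omega.device)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def sampling_loss(omega, t, xyz, rgb, pano):
    # Point cloud-centric loss: evaluated at the projected location of every point.
    R = axis_angle_to_matrix(omega)
    p = (xyz - t) @ R.T                               # points in the camera frame
    lon = torch.atan2(p[:, 1], p[:, 0])               # azimuth in [-pi, pi]
    lat = torch.atan2(p[:, 2], p[:, :2].norm(dim=1))  # elevation in [-pi/2, pi/2]
    # Normalized equirectangular coordinates for grid_sample, in [-1, 1].
    grid = torch.stack([lon / math.pi, -2.0 * lat / math.pi], dim=-1)
    grid = grid.view(1, -1, 1, 2)                     # (1, N, 1, 2)
    img = pano.permute(2, 0, 1).unsqueeze(0)          # (1, 3, H, W)
    sampled = F.grid_sample(img, grid, mode='bilinear', align_corners=True)
    sampled = sampled[0, :, :, 0].T                   # (N, 3) colors read from the panorama
    return ((sampled - rgb) ** 2).mean()

# Hypothetical inputs: xyz/rgb form the colored point cloud, pano is the query panorama.
xyz, rgb = torch.randn(1000, 3), torch.rand(1000, 3)
pano = torch.rand(256, 512, 3)
omega = torch.full((3,), 1e-3, requires_grad=True)    # rotation (axis-angle), near identity
t = torch.zeros(3, requires_grad=True)                # translation
opt = torch.optim.Adam([omega, t], lr=1e-2)
for _ in range(200):                                  # gradient-descent pose refinement
    opt.zero_grad()
    loss = sampling_loss(omega, t, xyz, rgb, pano)
    loss.backward()
    opt.step()
```
Unlike a photometric loss, which iterates over image pixels, this loss touches every 3D point, so the full 360-degree context of the panorama contributes to the optimization.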
Related papers
- OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera Fusion for Colorizing Point Clouds [15.11376768491973]
A colored point cloud is a simple and efficient 3D representation with advantages in many fields.
This paper presents OmniColor, a novel and efficient algorithm to colorize point clouds using an independent 360-degree camera.
arXiv Detail & Related papers (2024-04-06T17:41:36Z)
- Deep Single Image Camera Calibration by Heatmap Regression to Recover Fisheye Images Under Manhattan World Assumption [9.018416031676136]
A Manhattan world assumption, in which scene structure aligns with cuboid buildings, is useful for camera angle estimation.
We propose a learning-based calibration method that uses heatmap regression to detect the directions of labeled image coordinates.
Our method outperforms conventional methods on large-scale datasets and with off-the-shelf cameras.
arXiv Detail & Related papers (2023-03-30T05:57:59Z)
- MeshLoc: Mesh-Based Visual Localization [54.731309449883284]
We explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation.
Surprisingly competitive results can be obtained when extracting features on renderings of these meshes, without any neural rendering stage.
Our results show that dense 3D model-based representations are a promising alternative to existing representations and point to interesting and challenging directions for future research.
arXiv Detail & Related papers (2022-07-21T21:21:10Z)
- CPO: Change Robust Panorama to Point Cloud Localization [20.567452635590946]
We present CPO, a robust algorithm that localizes a 2D panorama with respect to a 3D point cloud of a scene possibly containing changes.
CPO is lightweight and achieves effective localization in all tested scenarios.
arXiv Detail & Related papers (2022-07-12T05:10:32Z)
- ADOP: Approximate Differentiable One-Pixel Point Rendering [7.69748487650283]
We present a point-based, differentiable neural rendering pipeline for scene refinement and novel view synthesis.
We show that our system is able to synthesize sharper and more consistent novel views than existing approaches.
arXiv Detail & Related papers (2021-10-13T10:55:39Z)
- DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
- How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines [49.06411148698547]
This paper shows that a significant amount of information about the 3D scene geometry is preserved in line clouds.
Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points.
arXiv Detail & Related papers (2021-03-08T21:32:43Z)
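As a small illustration of the closest-points observation in the line-cloud entry above (a numpy sketch, not the paper's code): if two lines from a line cloud pass close to the same scene point, the midpoint of their mutually closest points approximates that point.
```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    # Midpoint of the closest points on two 3D lines given as point + direction (p + s*d).
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 when the lines are (near-)parallel
    if abs(denom) < 1e-9:
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = p1 + s * d1               # closest point on line 1
    q2 = p2 + t * d2               # closest point on line 2
    return 0.5 * (q1 + q2)

# Two lines drawn through (almost) the same scene point with random directions.
scene_point = np.array([1.0, 2.0, 3.0])
p1, d1 = scene_point + 0.01 * np.random.randn(3), np.random.randn(3)
p2, d2 = scene_point + 0.01 * np.random.randn(3), np.random.randn(3)
print(closest_point_between_lines(p1, d1, p2, d2))   # close to [1, 2, 3]
```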
- Robust Place Recognition using an Imaging Lidar [45.37172889338924]
We propose a methodology for robust, real-time place recognition using an imaging lidar.
Our method is truly invariant and can tackle reverse revisiting and upside-down revisiting.
arXiv Detail & Related papers (2021-03-03T01:08:31Z)
- Inter-Image Communication for Weakly Supervised Localization [77.2171924626778]
Weakly supervised localization aims at finding target object regions using only image-level supervision.
We propose to leverage pixel-level similarities across different objects for learning more accurate object locations.
Our method achieves a Top-1 localization error of 45.17% on the ILSVRC validation set.
arXiv Detail & Related papers (2020-08-12T04:14:11Z)
- Perspective Plane Program Induction from a Single Image [85.28956922100305]
We study the inverse graphics problem of inferring a holistic representation for natural images.
We formulate this problem as jointly finding the camera pose and scene structure that best describe the input image.
Our proposed framework, Perspective Plane Program Induction (P3I), combines search-based and gradient-based algorithms to efficiently solve the problem.
arXiv Detail & Related papers (2020-06-25T21:18:58Z)
- Multi-View Optimization of Local Feature Geometry [70.18863787469805]
We address the problem of refining the geometry of local image features from multiple views without known scene or camera geometry.
Our proposed method naturally complements the traditional feature extraction and matching paradigm.
We show that our method consistently improves the triangulation and camera localization performance for both hand-crafted and learned local features.
arXiv Detail & Related papers (2020-03-18T17:22:11Z)