Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line
Correspondences
- URL: http://arxiv.org/abs/2004.00740v2
- Date: Fri, 31 Jul 2020 17:22:35 GMT
- Title: Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line
Correspondences
- Authors: Huai Yu, Weikun Zhen, Wen Yang, Ji Zhang, Sebastian Scherer
- Abstract summary: We propose an efficient monocular camera localization method in prior LiDAR maps using direct 2D-3D line correspondences.
With the pose prediction from VIO, we can efficiently obtain coarse 2D-3D line correspondences.
The proposed method can efficiently estimate camera poses without accumulated drifts or pose jumps in structured environments.
- Score: 16.34334330572825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light-weight camera localization in existing maps is essential for
vision-based navigation. Currently, visual and visual-inertial odometry
(VO & VIO) techniques are well developed for state estimation, but they suffer
from inevitable accumulated drift and pose jumps upon loop closure. To overcome
these problems, we propose an efficient monocular camera localization method in
prior LiDAR maps using direct 2D-3D line correspondences. To handle the
appearance differences and modality gaps between LiDAR point clouds and images,
geometric 3D lines are extracted offline from LiDAR maps while robust 2D lines
are extracted online from video sequences. With the pose prediction from VIO,
we can efficiently obtain coarse 2D-3D line correspondences. Then the camera
poses and 2D-3D correspondences are iteratively optimized by minimizing the
projection error of correspondences and rejecting outliers. Experimental
results on the EuRoC MAV dataset and our collected dataset demonstrate that the
proposed method can efficiently estimate camera poses without accumulated
drifts or pose jumps in structured environments.
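The abstract describes an iterative scheme: project 3D map lines into the image with the current pose estimate, measure the distance of the projected endpoints to the detected 2D lines, and reject correspondences with large residuals before re-optimizing. A minimal sketch of that residual and outlier-gating step (the function names, interfaces, and 3-pixel threshold are illustrative assumptions, not from the paper):

```python
import numpy as np

def project(K, R, t, P):
    """Project 3D points P (N, 3) into pixel coordinates with pose (R, t)."""
    Pc = (R @ P.T).T + t          # world frame -> camera frame
    p = (K @ Pc.T).T              # camera frame -> image plane
    return p[:, :2] / p[:, 2:3]   # perspective divide

def line_reprojection_error(K, R, t, line3d, line2d):
    """Distances from the projected 3D line endpoints to a detected 2D line.

    line3d: (2, 3) endpoints in the map frame
    line2d: (a, b, c) with a*u + b*v + c = 0 and a^2 + b^2 = 1
    """
    uv = project(K, R, t, line3d)                    # (2, 2) endpoints in pixels
    a, b, c = line2d
    return np.abs(a * uv[:, 0] + b * uv[:, 1] + c)   # per-endpoint distances

def reject_outliers(K, R, t, correspondences, thresh_px=3.0):
    """Keep 2D-3D line pairs whose mean endpoint distance is below threshold."""
    return [(L3, L2) for L3, L2 in correspondences
            if line_reprojection_error(K, R, t, L3, L2).mean() < thresh_px]
```

In the paper's pipeline this gating would alternate with a pose update that minimizes the same point-to-line residuals; the sketch above only shows the residual and rejection half of that loop.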
Related papers
- EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale
Visual Localization [44.05930316729542]
We propose EP2P-Loc, a novel large-scale visual localization method for 3D point clouds.
To increase the number of inliers, we propose a simple algorithm to remove invisible 3D points in the image.
For the first time in this task, we employ a differentiable PnP for end-to-end training.
arXiv Detail & Related papers (2023-09-14T07:06:36Z)
- Improving Feature-based Visual Localization by Geometry-Aided Matching [21.1967752160412]
We introduce a novel 2D-3D matching method, Geometry-Aided Matching (GAM), which uses both appearance information and geometric context to improve 2D-3D feature matching.
GAM can greatly strengthen the recall of 2D-3D matches while maintaining high precision.
Our proposed localization method achieves state-of-the-art results on multiple visual localization datasets.
arXiv Detail & Related papers (2022-11-16T07:02:12Z)
- Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving [74.74519047735916]
3D human pose estimation (HPE) in autonomous vehicles (AV) differs from other use cases in many factors.
Data collected for other use cases (such as virtual reality, gaming, and animation) may not be usable for AV applications.
We propose one of the first approaches to alleviate this problem in the AV setting.
arXiv Detail & Related papers (2021-12-22T18:57:16Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- MetaPose: Fast 3D Pose from Multiple Views without 3D Supervision [72.5863451123577]
We show how to train a neural model that can perform accurate 3D pose and camera estimation.
Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines.
arXiv Detail & Related papers (2021-08-10T18:39:56Z)
- Lidar-Monocular Surface Reconstruction Using Line Segments [5.542669744873386]
We propose to leverage common geometric features that are detected in both the LIDAR scans and image data, allowing data from the two sensors to be processed in a higher-level space.
We show that our method delivers results that are comparable to a state-of-the-art LIDAR survey while not requiring highly accurate ground truth pose estimates.
arXiv Detail & Related papers (2021-04-06T19:49:53Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and a 3D cylinder convolution based framework, termed as Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z) - Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point
Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and the 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)
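The blind PnP entry above couples pose estimation with correspondence search. When the 2D-3D point correspondences are already known, the classical (non-blind) version reduces to estimating the 3x4 projection matrix, e.g. with the Direct Linear Transform. A minimal sketch of that baseline (not an implementation of any listed paper):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P (up to scale) from n >= 6
    non-coplanar 2D-3D point correspondences via the Direct Linear Transform.

    X: (n, 3) 3D points, x: (n, 2) pixel observations.
    """
    A = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        # Each correspondence yields two linear constraints on vec(P).
        A.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -u[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null-space solution, scale-ambiguous

def reproject(P, X):
    """Project 3D points X (n, 3) with projection matrix P."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    p = (P @ Xh.T).T
    return p[:, :2] / p[:, 2:3]
```

Learning-based approaches such as the one above replace this hand-crafted solver (and the assumption of known correspondences) with an end-to-end trainable model.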
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.