JVLDLoc: a Joint Optimization of Visual-LiDAR Constraints and Direction Priors for Localization in Driving Scenario
- URL: http://arxiv.org/abs/2208.09777v1
- Date: Sun, 21 Aug 2022 01:50:31 GMT
- Title: JVLDLoc: a Joint Optimization of Visual-LiDAR Constraints and Direction Priors for Localization in Driving Scenario
- Authors: Longrui Dong and Gang Zeng
- Abstract summary: We propose a scheme that fuses map priors and vanishing points from images, which establishes an energy term constrained only on rotation.
We embed these direction priors into a visual-LiDAR SLAM system that integrates camera and LiDAR measurements in a tightly-coupled way at the backend.
Experiments on KITTI, KITTI-360 and Oxford Radar RobotCar show that we achieve lower localization error, measured as Absolute Pose Error (APE), than the prior map, which validates that our method is effective.
- Score: 13.439456870837029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of a moving agent to localize itself in its environment is a
basic demand for emerging applications such as autonomous driving. Many
existing methods based on multiple sensors still suffer from drift. We propose
a scheme that fuses map priors and vanishing points from images, which
establishes an energy term constrained only on rotation, called the
direction projection error. We then embed these direction priors into a
visual-LiDAR SLAM system that integrates camera and LiDAR measurements in a
tightly-coupled way at the backend. Specifically, our method generates visual
reprojection error constraints and point-to-Implicit Moving Least Squares (IMLS)
surface constraints from scans, and solves them jointly with the direction
projection error in a global optimization. Experiments on KITTI, KITTI-360 and
Oxford Radar RobotCar show that we achieve lower localization error, measured as
Absolute Pose Error (APE), than the prior map, which validates that our method is effective.
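The direction projection error described above can be sketched as a rotation-only angular residual: a known direction from the map prior, rotated into the camera frame, should align with the ray through the observed vanishing point. The following is a minimal, noise-free illustration of that idea; the function names and toy setup are ours for exposition, not the paper's implementation.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def direction_projection_error(R_wc, d_world, ray_cam):
    """Angle between a map direction rotated into the camera frame and the
    observed vanishing-point ray. Depends only on rotation, not translation."""
    d_cam = R_wc @ d_world
    cos = np.dot(d_cam, ray_cam) / (np.linalg.norm(d_cam) * np.linalg.norm(ray_cam))
    return np.arccos(np.clip(cos, -1.0, 1.0))

d_world = np.array([1.0, 0.0, 0.0])  # e.g. a road direction from the map prior
R_true = rot_z(0.3)                  # ground-truth camera rotation
ray = R_true @ d_world               # ideal (noise-free) vanishing-point ray

print(direction_projection_error(R_true, d_world, ray))     # ~ 0.0
print(direction_projection_error(np.eye(3), d_world, ray))  # ~ 0.3 rad
```

In the paper this residual is minimized jointly with the visual reprojection and point-to-IMLS terms in the backend; here it only shows why the term constrains rotation alone, since translation never enters the residual.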
Related papers
- ROLO-SLAM: Rotation-Optimized LiDAR-Only SLAM in Uneven Terrain with Ground Vehicle [49.61982102900982]
A LiDAR-based SLAM method is presented to improve the accuracy of pose estimations for ground vehicles in rough terrains.
A global-scale factor graph is established to aid in the reduction of cumulative errors.
The results demonstrate that ROLO-SLAM excels in pose estimation of ground vehicles and outperforms existing state-of-the-art LiDAR SLAM frameworks.
arXiv Detail & Related papers (2025-01-04T02:44:27Z)
- GLACE: Global Local Accelerated Coordinate Encoding [66.87005863868181]
Scene coordinate regression methods are effective in small-scale scenes but face significant challenges in large-scale scenes.
We propose GLACE, which integrates pre-trained global and local encodings and enables SCR to scale to large scenes with only a single small-sized network.
Our method achieves state-of-the-art results on large-scale scenes with a low-map-size model.
arXiv Detail & Related papers (2024-06-06T17:59:50Z)
- SLAIM: Robust Dense Neural SLAM for Online Tracking and Mapping [15.63276368052395]
We propose a novel coarse-to-fine tracking model tailored for Neural Radiance Field SLAM (NeRF-SLAM).
Existing NeRF-SLAM systems consistently exhibit inferior tracking performance compared to traditional SLAM algorithms.
We implement both local and global bundle-adjustment to produce a robust (coarse-to-fine) and accurate (KL regularizer) SLAM solution.
arXiv Detail & Related papers (2024-04-17T14:23:28Z)
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z)
- Tightly-Coupled LiDAR-Visual SLAM Based on Geometric Features for Mobile Agents [43.137917788594926]
We propose a tightly-coupled LiDAR-visual SLAM based on geometric features.
Full line segments detected by the visual subsystem overcome limitations of the LiDAR subsystem.
Our system achieves more accurate and robust pose estimation compared to current state-of-the-art multi-modal methods.
arXiv Detail & Related papers (2023-07-15T10:06:43Z)
- LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors [38.60837840737258]
We propose a multi-sensor fusion method for capturing 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios.
We design a two-stage pose estimator in a coarse-to-fine manner, where point clouds provide the coarse body shape and IMU measurements optimize the local actions.
We collect a LiDAR-IMU multi-modal mocap dataset, LIPD, with diverse human actions in long-range scenarios.
arXiv Detail & Related papers (2022-05-30T20:15:11Z)
- Event-aided Direct Sparse Odometry [54.602311491827805]
We introduce EDS, a direct monocular visual odometry using events and frames.
Our algorithm leverages the event generation model to track the camera motion in the blind time between frames.
EDS is the first method to perform 6-DOF VO using events and frames with a direct approach.
arXiv Detail & Related papers (2022-04-15T20:40:29Z)
- Progressive Coordinate Transforms for Monocular 3D Object Detection [52.00071336733109]
We propose a novel and lightweight approach, dubbed Progressive Coordinate Transforms (PCT), to facilitate learning coordinate representations.
arXiv Detail & Related papers (2021-08-12T15:22:33Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach can be up to 10x faster with comparable accuracy against the state of the art on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- Graph-based Proprioceptive Localization Using a Discrete Heading-Length Feature Sequence Matching Approach [14.356113113268389]
Proprioceptive localization refers to a new class of robot egocentric localization methods.
These methods are naturally immune to bad weather, poor lighting conditions, or other extreme environmental conditions.
We provide a low-cost fallback solution for localization under challenging environmental conditions.
arXiv Detail & Related papers (2020-05-27T23:10:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.