EdgeCalib: Multi-Frame Weighted Edge Features for Automatic Targetless
LiDAR-Camera Calibration
- URL: http://arxiv.org/abs/2310.16629v1
- Date: Wed, 25 Oct 2023 13:27:56 GMT
- Title: EdgeCalib: Multi-Frame Weighted Edge Features for Automatic Targetless
LiDAR-Camera Calibration
- Authors: Xingchen Li, Yifan Duan, Beibei Wang, Haojie Ren, Guoliang You, Yu
Sheng, Jianmin Ji, Yanyong Zhang
- Abstract summary: We introduce an edge-based approach for automatic online calibration of LiDAR and cameras in real-world scenarios.
The edge features, which are prevalent in various environments, are aligned in both images and point clouds to determine the extrinsic parameters.
The results show a state-of-the-art rotation accuracy of 0.086° and a translation accuracy of 0.977 cm, outperforming existing edge-based calibration methods in both precision and robustness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multimodal perception systems, achieving precise extrinsic calibration
between LiDAR and camera is of critical importance. Previous calibration
methods often required specific targets or manual adjustments, making them both
labor-intensive and costly. Online calibration methods based on features have
been proposed, but these methods encounter challenges such as imprecise feature
extraction, unreliable cross-modality associations, and high scene-specific
requirements. To address this, we introduce an edge-based approach for
automatic online calibration of LiDAR and cameras in real-world scenarios. The
edge features, which are prevalent in various environments, are aligned in both
images and point clouds to determine the extrinsic parameters. Specifically,
stable and robust image edge features are extracted using a SAM-based method
and the edge features extracted from the point cloud are weighted through a
multi-frame weighting strategy for feature filtering. Finally, accurate
extrinsic parameters are optimized based on edge correspondence constraints. We
conducted evaluations on both the KITTI dataset and our dataset. The results
show a state-of-the-art rotation accuracy of 0.086° and a translation
accuracy of 0.977 cm, outperforming existing edge-based calibration methods in
both precision and robustness.
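The pipeline described in the abstract (project LiDAR edge points into the image, weight them by their stability across frames, and minimize their distance to image edges) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the small-angle rotation parameterization, the distance-transform cost, and the simple frequency-based weighting are all assumptions made for brevity; the names `project_points`, `edge_alignment_cost`, and `multi_frame_weights` are hypothetical.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D LiDAR points into the image with extrinsics (R, t) and intrinsics K."""
    cam = points @ R.T + t          # LiDAR frame -> camera frame
    uv = cam @ K.T                  # pinhole projection (homogeneous pixel coords)
    return uv[:, :2] / uv[:, 2:3]   # normalize by depth

def multi_frame_weights(edge_hits, n_frames):
    """Weight each LiDAR edge point by how often it was detected as an edge
    across frames; rarely seen points are down-weighted as unstable."""
    return edge_hits / float(n_frames)

def edge_alignment_cost(params, lidar_edges, edge_dist_map, weights, K):
    """Weighted sum of image-edge distances at the projected LiDAR edge points.

    `edge_dist_map[v, u]` holds the distance from pixel (u, v) to the nearest
    image edge (e.g. a distance transform of a SAM-derived edge mask);
    minimizing this over `params` aligns the two edge sets.
    """
    rx, ry, rz, tx, ty, tz = params
    # Small-angle approximation of the rotation, adequate for a local refinement sketch.
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    uv = project_points(lidar_edges, K, R, np.array([tx, ty, tz]))
    h, w = edge_dist_map.shape
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return float(np.sum(weights * edge_dist_map[v, u]))
```

In practice this cost would be fed to a nonlinear least-squares solver and the edge distance map would be blurred or truncated to widen the basin of convergence.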
Related papers
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z) - YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems [0.5999777817331317]
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z) - CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network [11.602943913324653]
CalibFormer is an end-to-end network for automatic LiDAR-camera calibration.
We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations.
Our method achieved a mean translation error of 0.8751 cm and a mean rotation error of 0.0562° on the KITTI dataset.
arXiv Detail & Related papers (2023-11-26T08:59:30Z) - P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion
Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on the 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness that can contribute to the practical applications relying on high-quality Camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z) - Vanishing Point Estimation in Uncalibrated Images with Prior Gravity
Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z) - Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter
Correction [54.00007868515432]
Existing methods face challenges in estimating the accurate correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art in PSNR by +4.98, +0.77, and +4.33 on the Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
arXiv Detail & Related papers (2023-03-31T15:09:18Z) - Learning-Based Framework for Camera Calibration with Distortion
Correction and High Precision Feature Detection [14.297068346634351]
We propose a hybrid camera calibration framework which combines learning-based approaches with traditional methods to handle these bottlenecks.
In particular, this framework leverages learning-based approaches to perform efficient distortion correction and robust chessboard corner coordinate encoding.
Compared with two widely used camera calibration toolboxes, experimental results on both real and synthetic datasets demonstrate the better robustness and higher precision of the proposed framework.
arXiv Detail & Related papers (2022-02-01T00:19:18Z) - Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z) - CRLF: Automatic Calibration and Refinement based on Line Feature for
LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameter for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects such as road lanes and poles in both image and point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)