TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo
Camera for Automated Driving Vehicles with Uncertainty Analysis
- URL: http://arxiv.org/abs/2202.13847v1
- Date: Mon, 28 Feb 2022 15:04:00 GMT
- Title: TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo
Camera for Automated Driving Vehicles with Uncertainty Analysis
- Authors: Haohao Hu, Fengze Han, Frank Bieder, Jan-Hendrik Pauls and Christoph
Stiller
- Abstract summary: TEScalib is a novel extrinsic self-calibration approach for LiDAR and stereo cameras.
It uses the geometric and photometric information of the surrounding environment, without any calibration targets, for automated driving vehicles.
Evaluated on the KITTI dataset, our approach achieves very promising results.
- Score: 4.616329048951671
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present TEScalib, a novel extrinsic self-calibration
approach for LiDAR and stereo cameras that uses the geometric and photometric
information of the surrounding environment, without any calibration targets,
for automated driving vehicles. Since LiDAR and stereo cameras are widely used
for sensor data fusion on automated driving vehicles, their extrinsic
calibration is highly important. However, most LiDAR and stereo camera
calibration approaches are target-based and therefore time-consuming. Even the
targetless approaches developed in recent years are either inaccurate or
unsuitable for driving platforms.
To address those problems, we introduce TEScalib. By applying a 3D mesh
reconstruction-based point cloud registration, the geometric information is
used to estimate the LiDAR-to-stereo-camera extrinsic parameters accurately
and robustly. To calibrate the stereo camera, a photometric error function is
built, and the LiDAR depth is used to transform key points from one camera to
the other. During driving, these two parts are processed iteratively. In
addition, we propose an uncertainty analysis that reflects the reliability of
the estimated extrinsic parameters. Evaluated on the KITTI dataset, our
TEScalib approach achieves very promising results.
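As a rough illustration of the photometric part, the sketch below back-projects left-image key points with their LiDAR depths, transforms them with the current extrinsic estimate, and re-projects them into the right image, where a photometric error is evaluated. This is a minimal numpy sketch under a pinhole model; all names (transfer_keypoints, T_rl, and so on) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transfer_keypoints(pts_l, depth_l, K_l, K_r, T_rl):
    """Map key points from the left to the right image using LiDAR depth.

    pts_l   : (N, 2) pixel coordinates in the left image
    depth_l : (N,)   LiDAR depth per key point, in meters
    K_l, K_r: (3, 3) camera intrinsic matrices
    T_rl    : (4, 4) homogeneous left-to-right extrinsic transform
    """
    n = pts_l.shape[0]
    ones = np.ones((n, 1))
    # Back-project pixels into 3-D points in the left camera frame.
    rays = (np.linalg.inv(K_l) @ np.hstack([pts_l, ones]).T).T
    X_l = rays * depth_l[:, None]
    # Transform into the right camera frame and re-project.
    X_r = (T_rl @ np.hstack([X_l, ones]).T).T[:, :3]
    uv = (K_r @ X_r.T).T
    return uv[:, :2] / uv[:, 2:3]

def photometric_error(img_l, img_r, pts_l, pts_r):
    """Sum of squared intensity differences at corresponding pixels
    (nearest-neighbour lookup; a real implementation would interpolate)."""
    il = img_l[pts_l[:, 1].astype(int), pts_l[:, 0].astype(int)]
    ir = img_r[pts_r[:, 1].astype(int), pts_r[:, 0].astype(int)]
    return float(np.sum((il.astype(float) - ir.astype(float)) ** 2))
```

In the approach described above, an error of this kind would be minimized over the extrinsic parameters, alternating with the geometric point cloud registration step.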
Related papers
- CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network [11.602943913324653]
CalibFormer is an end-to-end network for automatic LiDAR-camera calibration.
We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations.
Our method achieved a mean translation error of $0.8751\,\mathrm{cm}$ and a mean rotation error of $0.0562^{\circ}$ on the KITTI dataset.
arXiv Detail & Related papers (2023-11-26T08:59:30Z)
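The feature aggregation mentioned in the CalibFormer entry above can be pictured with a small PyTorch sketch: feature maps from several backbone stages are upsampled to the finest resolution and fused by a 1x1 convolution. This is a generic multi-scale aggregation sketch, not the CalibFormer architecture; the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleAggregator(nn.Module):
    """Illustrative sketch: fuse feature maps from several backbone
    stages into one high-resolution representation by upsampling and
    concatenating them (not CalibFormer's actual architecture)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feature_maps):
        target = feature_maps[0].shape[-2:]   # finest resolution
        up = [nn.functional.interpolate(f, size=target, mode="bilinear",
                                        align_corners=False)
              for f in feature_maps]
        return self.proj(torch.cat(up, dim=1))
```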
- From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration [4.378156825150505]
We propose a target-free extrinsic calibration algorithm that requires no ground-truth training data.
We demonstrate our proposed improvement using the KITTI and KITTI-360 fisheye datasets.
arXiv Detail & Related papers (2023-11-03T13:30:31Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
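A plausible reading of the "spatial quantized historical features" mentioned above is to accumulate points from past traversals of the same location into a coarse grid of hit counts. The numpy sketch below is only an illustration of that idea, with hypothetical names and parameters, not the paper's design.

```python
import numpy as np

def historical_feature_grid(past_scans, cell=0.5, extent=50.0):
    """Quantize points from past traversals into a 2-D grid of hit
    counts (illustrative reading of 'spatial quantized historical
    features'; cell size and extent are arbitrary assumptions)."""
    bins = int(2 * extent / cell)
    grid = np.zeros((bins, bins), dtype=np.float32)
    for scan in past_scans:                       # each scan: (N, 3) xyz
        ij = ((scan[:, :2] + extent) / cell).astype(int)
        ok = (ij >= 0).all(axis=1) & (ij < bins).all(axis=1)
        np.add.at(grid, (ij[ok, 0], ij[ok, 1]), 1.0)
    return grid / max(len(past_scans), 1)         # normalized hit counts
```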
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR [7.906477322731106]
Accurate extrinsic calibration is required to fuse camera and LiDAR data into the common spatial reference frame needed by high-level perception functions.
There is a need for continuous online extrinsic calibration algorithms that can automatically update the camera-LiDAR calibration during the life of the vehicle using only sensor data.
We propose using the mutual information between the camera image's depth estimate, provided by commonly available monocular depth estimation networks, and the LiDAR point cloud's geometric distance as an optimization metric for extrinsic calibration.
arXiv Detail & Related papers (2023-06-22T23:16:31Z)
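The optimization metric described in the fisheye calibration entry above can be sketched as follows: project LiDAR points into the image with a candidate extrinsic, sample the monocular depth estimate at the projected pixels, and score the candidate by the mutual information between the two depth samples. The sketch assumes a simple pinhole projection rather than the paper's fisheye model, and the histogram-based MI estimator and all names are illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def calibration_score(lidar_pts, mono_depth, K, T_cam_lidar):
    """Score a candidate LiDAR-to-camera extrinsic: high mutual
    information between LiDAR range and monocular depth at the
    projected pixels indicates good alignment."""
    pts_h = np.hstack([lidar_pts, np.ones((len(lidar_pts), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = mono_depth.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    lidar_range = np.linalg.norm(pts_cam[ok], axis=1)
    est_depth = mono_depth[uv[ok, 1], uv[ok, 0]]
    return mutual_information(lidar_range, est_depth)
```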
- Online LiDAR-Camera Extrinsic Parameters Self-checking [12.067216966113708]
This paper proposes a self-checking algorithm to judge whether the extrinsic parameters are well calibrated by introducing a binary classification network.
The code is open-sourced on the Github website at https://github.com/OpenCalib/LiDAR2camera_self-check.
arXiv Detail & Related papers (2022-10-19T13:17:48Z)
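The self-checking idea above lends itself to a compact sketch: stack the RGB image with the LiDAR depth projected using the current extrinsics and let a small CNN output a calibrated/miscalibrated logit. This is a hypothetical architecture for illustration; the open-sourced network linked above will differ.

```python
import torch
import torch.nn as nn

class CalibSelfCheck(nn.Module):
    """Minimal sketch of a binary classifier that judges whether a
    LiDAR-camera extrinsic is still valid, from an RGB image stacked
    with the LiDAR depth projected using the current extrinsics
    (illustrative; not the open-sourced architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),  # RGB + depth
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),                    # logit: well calibrated?
        )

    def forward(self, rgb, projected_depth):
        x = torch.cat([rgb, projected_depth], dim=1)  # (B, 4, H, W)
        return self.net(x)
```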
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameters for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects, such as road lanes and poles, in both the image and the point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
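One way to picture the line-feature alignment used in the CRLF entry above: project LiDAR points belonging to straight-line structures into the image and measure how many land on detected line pixels. The sketch below is an illustrative scoring function under a pinhole model, not CRLF's actual cost formulation.

```python
import numpy as np

def line_alignment_score(pc_line_pts, line_mask, K, T_cam_lidar):
    """Score a candidate LiDAR-to-camera extrinsic by projecting LiDAR
    points that lie on straight-line structures onto a binary image
    mask of detected line features (illustrative, not CRLF's exact
    formulation)."""
    pts_h = np.hstack([pc_line_pts, np.ones((len(pc_line_pts), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = line_mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    # Fraction of projected line points that land on image line pixels.
    return float(line_mask[uv[ok, 1], uv[ok, 0]].mean())
```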
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline at identifying whether a recalibration of the camera's intrinsic parameters is required.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
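A miscalibration metric of the kind described above can be illustrated by measuring how far pixels move when re-projected with perturbed intrinsics. The sketch below is a hypothetical metric for intuition only, not the paper's definition.

```python
import numpy as np

def avg_pixel_displacement(K_true, K_est, image_size, depth=10.0, n=1000):
    """Hypothetical miscalibration metric: average displacement (in
    pixels) of points projected with the estimated vs. the true
    intrinsics (illustrative; the paper defines its own metric)."""
    w, h = image_size
    rng = np.random.default_rng(0)
    uv = rng.uniform([0, 0], [w, h], size=(n, 2))        # sample pixels
    rays = np.linalg.inv(K_true) @ np.hstack([uv, np.ones((n, 1))]).T
    pts = (rays * depth).T                                # 3-D points
    proj = (K_est @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.linalg.norm(proj - uv, axis=1).mean())
```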
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)