End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles
- URL: http://arxiv.org/abs/2304.12412v2
- Date: Fri, 28 Apr 2023 01:12:36 GMT
- Title: End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles
- Authors: Arya Rachman, Jürgen Seiler, and André Kaup
- Abstract summary: CaLiCa is an end-to-end self-calibration network for Lidar and pinhole cameras.
We achieve 0.154° rotation and 0.059 m translation accuracy with a reprojection error of 0.028 pixels in a single-pass inference.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Autonomous vehicles are equipped with a multi-modal sensor setup to enable
the car to drive safely. The initial calibration of such perception sensors is
a highly mature topic and is routinely done in an automated factory
environment. However, an intriguing question arises as to how to maintain the
calibration quality throughout the vehicle's operating life. Another
challenge is to calibrate multiple sensors jointly to ensure no propagation of
systematic errors. In this paper, we propose CaLiCa, an end-to-end deep
self-calibration network which addresses the automatic calibration problem for
pinhole camera and Lidar. We jointly predict the camera intrinsic parameters
(focal length and distortion) as well as Lidar-Camera extrinsic parameters
(rotation and translation), by regressing feature correlation between the
camera image and the Lidar point cloud. The network is arranged in a
Siamese-twin structure to constrain feature learning to features mutually
shared by the point cloud and the camera image (the Lidar-camera constraint).
Evaluation on the KITTI dataset shows that we achieve 0.154° rotation and
0.059 m translation accuracy with a reprojection error of 0.028 pixels in a
single-pass inference.
We also provide an ablation study showing how our end-to-end learning
architecture achieves a lower terminal loss (a 21% decrease in rotation loss)
compared to isolated calibration.
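As a rough illustration of the architecture the abstract describes, the sketch below shows a Siamese-twin network in PyTorch whose shared-weight encoder embeds both the camera image and a Lidar depth projection before a joint head regresses intrinsics, distortion, and extrinsics. All layer sizes, the concatenation used as a stand-in for feature correlation, and the nine-parameter output layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SiameseCalibNet(nn.Module):
    """Hypothetical CaLiCa-style sketch, not the published network."""
    def __init__(self):
        super().__init__()
        # Shared-weight encoder applied to both modalities (the Siamese twin):
        # a grayscale camera image and the Lidar point cloud rendered as a
        # depth map, each of shape (B, 1, H, W).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Joint regression head: 2 focal lengths + 1 distortion coefficient
        # + 3 rotation (axis-angle) + 3 translation = 9 outputs.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 9),
        )

    def forward(self, image_gray, lidar_depth):
        f_img = self.encoder(image_gray)   # shared weights pull both ...
        f_pcl = self.encoder(lidar_depth)  # ... modalities to one feature space
        fused = torch.cat([f_img, f_pcl], dim=1)  # crude stand-in for correlation
        p = self.head(fused)
        return p[:, :2], p[:, 2:3], p[:, 3:6], p[:, 6:9]  # f, k, rot, trans
```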
Related papers
- Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR [7.906477322731106]
An accurate extrinsic calibration is required to fuse the camera and LiDAR data into a common spatial reference frame required by high-level perception functions.
There is a need for continuous online extrinsic calibration algorithms which can automatically update the value of the camera-LiDAR calibration during the life of the vehicle using only sensor data.
We propose using the mutual information between the camera image's depth estimate, provided by commonly available monocular depth estimation networks, and the LiDAR point cloud's geometric distance as an optimization metric for extrinsic calibration.
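As a hedged sketch of this idea, the snippet below scores a candidate extrinsic by the mutual information between the monocular depth estimate and the Lidar ranges projected into the image; the histogram-based estimator and all names are assumptions, not the paper's code.

```python
import numpy as np

def mutual_information(mono_depth, lidar_range, bins=32):
    """MI between the camera's per-pixel depth estimate and the projected
    Lidar distances, sampled only at pixels that received a Lidar return."""
    joint, _, _ = np.histogram2d(mono_depth, lidar_range, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over Lidar-range bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over depth bins
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# An optimizer would maximize this score over the 6-DoF extrinsic used to
# project the point cloud, e.g. by grid or derivative-free search.
```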
arXiv Detail & Related papers (2023-06-22T23:16:31Z)
- Automated Static Camera Calibration with Intelligent Vehicles [58.908194559319405]
We present a robust calibration method for automated geo-referenced camera calibration.
Our method requires a calibration vehicle equipped with a combined GNSS/RTK receiver and an inertial measurement unit (IMU) for self-localization.
Our method does not require any human interaction and uses only the information recorded by both the infrastructure and the vehicle.
arXiv Detail & Related papers (2023-04-21T08:50:52Z)
- SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving [10.517099201352414]
SceneCalib is a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor.
We resolve these issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds.
arXiv Detail & Related papers (2023-04-11T23:02:16Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Unsupervised Depth Completion with Calibrated Backprojection Layers [79.35651668390496]
We propose a deep neural network architecture to infer dense depth from an image and a sparse point cloud.
It is trained using a video stream and corresponding synchronized sparse point cloud, as obtained from a LIDAR or other range sensor, along with the intrinsic calibration parameters of the camera.
At inference time, the calibration of the camera, which can be different from the one used for training, is fed as an input to the network along with the sparse point cloud and a single image.
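A minimal sketch of what feeding the calibration at inference can look like: lifting pixel coordinates to 3D rays with an intrinsic matrix K supplied at run time, so the same network weights can serve a camera other than the training one. Shapes and names are illustrative assumptions, not the paper's layers.

```python
import torch

def backproject(K, height, width):
    """Return a (3, H, W) grid of camera rays K^-1 @ [u, v, 1]^T."""
    v, u = torch.meshgrid(
        torch.arange(height, dtype=torch.float32),
        torch.arange(width, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)
    rays = torch.linalg.inv(K) @ pix  # calibrated rays, one per pixel
    return rays.reshape(3, height, width)
```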
arXiv Detail & Related papers (2021-08-24T05:41:59Z)
- Lidar and Camera Self-Calibration using CostVolume Network [3.793450497896671]
Instead of regressing the parameters between LiDAR and camera directly, we predict the decalibration deviation from the initial calibration to the ground truth.
Our approach outperforms CNN-based state-of-the-art methods, with a mean absolute calibration error of 0.297 cm in translation and 0.017° in rotation for miscalibration magnitudes of up to 1.5 m and 20°.
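A small sketch of the "regress the deviation, not the pose" idea, assuming 4x4 homogeneous transforms; the composition order and names are illustrative, not the paper's code.

```python
import numpy as np

def compose_correction(T_init, T_delta):
    """Apply the predicted decalibration deviation to the initial extrinsic
    (both are 4x4 homogeneous Lidar-to-camera transforms)."""
    return T_delta @ T_init

# Training target consistent with this composition:
#   T_delta_gt = T_gt @ np.linalg.inv(T_init)
# so that compose_correction(T_init, T_delta_gt) recovers T_gt exactly.
```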
arXiv Detail & Related papers (2020-12-27T09:41:45Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
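One plausible form such a miscalibration metric could take, sketched below: the mean pixel displacement when the same camera-frame points are projected with the nominal intrinsics and a perturbed estimate. The exact metric in the paper may differ; everything here is an assumption for illustration.

```python
import numpy as np

def mean_reprojection_offset(K, K_hat, points_cam):
    """Mean pixel displacement between projections of the same camera-frame
    points (N, 3; z > 0) under nominal K and perturbed K_hat (both 3x3)."""
    def project(K, P):
        uvw = (K @ P.T).T            # pinhole projection
        return uvw[:, :2] / uvw[:, 2:3]
    d = project(K, points_cam) - project(K_hat, points_cam)
    return float(np.linalg.norm(d, axis=1).mean())
```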
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
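For reference, a minimal binary form of the focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t), which down-weights already-confident predictions; the snippet is a generic sketch, not the paper's training code.

```python
import torch

def binary_focal_loss(logits, targets, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t); targets in {0, 1}."""
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)  # probability of the true class
    return (-(1 - p_t) ** gamma * torch.log(p_t.clamp_min(1e-8))).mean()
```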
arXiv Detail & Related papers (2020-02-21T17:35:50Z)