Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups
- URL: http://arxiv.org/abs/2101.04431v1
- Date: Tue, 12 Jan 2021 12:02:26 GMT
- Title: Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups
- Authors: Jorge Beltrán, Carlos Guindel, Fernando García
- Abstract summary: We present a method to calibrate the extrinsic parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most sensor setups for onboard autonomous perception are composed of LiDARs
and vision systems, as they provide complementary information that improves the
reliability of the different algorithms necessary to obtain a robust scene
understanding. However, the effective use of information from different sources
requires an accurate calibration between the sensors involved, which usually
implies a tedious and burdensome process. We present a method to calibrate the
extrinsic parameters of any pair of sensors involving LiDARs, monocular or
stereo cameras, of the same or different modalities. The procedure is composed
of two stages: first, reference points belonging to a custom calibration target
are extracted from the data provided by the sensors to be calibrated, and
second, the optimal rigid transformation is found through the registration of
both point sets. The proposed approach can handle devices with very different
resolutions and poses, as usually found in vehicle setups. In order to assess
the performance of the proposed method, a novel evaluation suite built on top
of a popular simulation framework is introduced. Experiments on the synthetic
environment show that our calibration algorithm significantly outperforms
existing methods, whereas real data tests corroborate the results obtained in
the evaluation suite. Open-source code is available at
https://github.com/beltransen/velo2cam_calibration
Related papers
- YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z)
- SOAC: Spatio-Temporal Overlap-Aware Multi-Sensor Calibration using Neural Radiance Fields
In rapidly-evolving domains such as autonomous driving, the use of multiple sensors with different modalities is crucial to ensure operational precision and stability.
To correctly exploit the information provided by each sensor in a single common frame, it is essential for these sensors to be accurately calibrated.
We leverage the ability of Neural Radiance Fields to represent different modalities in a common representation.
arXiv Detail & Related papers (2023-11-27T13:25:47Z)
- TrajMatch: Towards Automatic Spatio-temporal Calibration for Roadside LiDARs through Trajectory Matching
We propose TrajMatch -- the first system that can automatically calibrate for roadside LiDARs in both time and space.
Experimental results show that TrajMatch can achieve a spatial calibration error of less than 10 cm and a temporal calibration error of less than 1.5 ms.
arXiv Detail & Related papers (2023-02-04T12:27:01Z)
- SST-Calib: Simultaneous Spatial-Temporal Parameter Calibration between LIDAR and Camera
A segmentation-based framework is proposed to jointly estimate the geometrical and temporal parameters in the calibration of a camera-LIDAR suite.
The proposed algorithm is tested on the KITTI dataset, and the result shows an accurate real-time calibration of both geometric and temporal parameters.
arXiv Detail & Related papers (2022-07-08T06:21:52Z)
- CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road Scene
CROON (automatiC multi-LiDAR CalibratiOn and Refinement method in rOad sceNe) is a two-stage method consisting of rough and refinement calibration.
Results on real-world and simulated data sets demonstrate the reliability and accuracy of our method.
arXiv Detail & Related papers (2022-03-07T07:36:31Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Learning Camera Miscalibration Detection
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation-based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
- EHSOD: CAM-Guided End-to-end Hybrid-Supervised Object Detection with Cascade Refinement
We present EHSOD, an end-to-end hybrid-supervised object detection system.
It can be trained in one shot on both fully and weakly-annotated data.
It achieves comparable results on multiple object detection benchmarks with only 30% fully-annotated data.
arXiv Detail & Related papers (2020-02-18T08:04:58Z)