LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution
- URL: http://arxiv.org/abs/2303.09825v1
- Date: Fri, 17 Mar 2023 08:07:56 GMT
- Title: LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution
- Authors: Jianhao Jiao, Feiyi Chen, Hexiang Wei, Jin Wu, Ming Liu
- Abstract summary: The combination of LiDARs and cameras enables a mobile robot to perceive environments with multi-modal data.
Traditional frame cameras are sensitive to changing illumination conditions, motivating us to introduce novel event cameras.
This paper proposes an automatic checkerboard-based approach to calibrate extrinsics between a LiDAR and a frame/event camera.
- Score: 10.117923901732743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of LiDARs and cameras enables a mobile robot to perceive
environments with multi-modal data, becoming a key factor in achieving robust
perception. Traditional frame cameras are sensitive to changing illumination
conditions, motivating us to introduce novel event cameras to make LiDAR-camera
fusion more complete and robust. However, to jointly exploit these sensors, the
challenging extrinsic calibration problem should be addressed. This paper
proposes an automatic checkerboard-based approach to calibrate extrinsics
between a LiDAR and a frame/event camera, where four contributions are
presented. Firstly, we present an automatic feature extraction and checkerboard
tracking method from LiDAR's point clouds. Secondly, we reconstruct realistic
frame images from event streams, enabling traditional corner detectors to be
applied to event cameras. Thirdly, we propose an initialization-refinement procedure to estimate
extrinsics using point-to-plane and point-to-line constraints in a
coarse-to-fine manner. Fourthly, we introduce a unified and globally optimal
solution to address two optimization problems in calibration. Our approach has
been validated with extensive experiments on 19 simulated and real-world
datasets and outperforms the state-of-the-art.
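The second contribution makes event data amenable to standard corner detection. Below is a minimal illustrative sketch, not from the paper: it assumes an (x, y, t, polarity) event array and uses a simple polarity histogram as a crude stand-in for the paper's realistic frame reconstruction, then runs OpenCV's conventional checkerboard detector on the result.

```python
import numpy as np
import cv2

def events_to_frame(events, height, width):
    """Accumulate events into a frame-like image.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity) with
    polarity in {-1, +1}. This signed histogram is only a stand-in for the
    realistic frame reconstruction described in the paper.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pol = events[:, 3].astype(np.float32)
    np.add.at(frame, (ys, xs), pol)   # signed event count per pixel
    frame -= frame.min()              # rescale to 8-bit for OpenCV detectors
    if frame.max() > 0:
        frame /= frame.max()
    return (frame * 255).astype(np.uint8)

def detect_checkerboard(frame, pattern_size=(7, 6)):
    """Run a traditional corner detector on the reconstructed frame."""
    found, corners = cv2.findChessboardCorners(frame, pattern_size)
    if found:  # refine detections to sub-pixel accuracy
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(frame, corners, (5, 5), (-1, -1), criteria)
    return found, corners
```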
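The third contribution can be pictured as jointly minimizing point-to-plane residuals (LiDAR points on the board should lie on the board plane expressed in the camera frame) and point-to-line residuals (LiDAR points on the board edges should lie on the corresponding 3D edge lines). The sketch below is an assumed local least-squares formulation for a single board pose, with an axis-angle parameterization chosen for illustration; the paper's actual contribution is a unified, globally optimal solver for these optimization problems.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, plane_pts, n, d, edge_pts, p0, u):
    """Stacked point-to-plane and point-to-line residuals.

    params    -- [rx, ry, rz, tx, ty, tz]: axis-angle rotation plus
                 translation mapping LiDAR points into the camera frame
                 (an illustrative parameterization, not the paper's).
    plane_pts -- (N, 3) LiDAR points on the checkerboard plane n.p + d = 0.
    edge_pts  -- (M, 3) LiDAR points on a board edge through point p0 with
                 unit direction u.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    plane_cam = plane_pts @ R.T + t
    edge_cam = edge_pts @ R.T + t
    r_plane = plane_cam @ n + d            # signed distances to the plane
    diff = edge_cam - p0
    r_line = diff - np.outer(diff @ u, u)  # offsets perpendicular to the line
    return np.concatenate([r_plane, r_line.ravel()])

# Coarse-to-fine usage: start from the initialization stage's estimate x0,
# then refine (all arrays below are placeholders for real correspondences):
# sol = least_squares(residuals, x0, args=(plane_pts, n, d, edge_pts, p0, u))
```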
Related papers
- YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems [0.5999777817331317]
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z)
- CMRNext: Camera to LiDAR Matching in the Wild for Localization and Extrinsic Calibration [9.693729708337125]
CMRNext is a novel approach for camera-LiDAR matching that is independent of sensor-specific parameters, is generalizable, and can be used in the wild.
We extensively evaluate CMRNext on six different robotic platforms, including three publicly available datasets and three in-house robots.
arXiv Detail & Related papers (2024-01-31T19:14:12Z)
- From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration [4.378156825150505]
We propose a target-free extrinsic calibration algorithm that requires no ground-truth training data.
We demonstrate the proposed improvement using the KITTI and KITTI-360 fisheye datasets.
arXiv Detail & Related papers (2023-11-03T13:30:31Z)
- Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and/or highly accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components leads to effective camera-LiDAR fusion.
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameters for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects, such as road lanes and poles, in both the image and the point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs and monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)