Online LiDAR-Camera Extrinsic Parameters Self-checking
- URL: http://arxiv.org/abs/2210.10537v2
- Date: Mon, 15 Jan 2024 01:44:53 GMT
- Title: Online LiDAR-Camera Extrinsic Parameters Self-checking
- Authors: Pengjin Wei, Guohang Yan, Yikang Li, Kun Fang, Jie Yang, Wei Liu
- Abstract summary: This paper proposes a self-checking algorithm to judge whether the extrinsic parameters are well-calibrated by introducing a binary classification network.
The code is open-sourced on GitHub at https://github.com/OpenCalib/LiDAR2camera_self-check.
- Score: 12.067216966113708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of neural networks and the increasing popularity of autonomous driving, the calibration of the LiDAR and the camera has attracted more and more attention. This calibration task is multi-modal: the rich color and texture information captured by the camera and the accurate three-dimensional spatial information from the LiDAR are both significant for downstream tasks. Current research mainly focuses on obtaining accurate calibration results through information fusion, but seldom analyzes whether the calibrated results are correct, which can be critically important in real-world applications. For example, in large-scale production, the LiDARs and cameras of each smart car must be well-calibrated when the car leaves the production line, and for the rest of the vehicle's life the poses of the LiDARs and cameras should be continually supervised to ensure safety. To this end, this paper proposes a self-checking algorithm that judges whether the extrinsic parameters are well-calibrated by introducing a binary classification network based on the fused information from the camera and the LiDAR. Moreover, since no dataset exists for this task, we further generate a new dataset branch from the KITTI dataset tailored for it. Our experiments on the proposed dataset branch demonstrate the performance of our method. To the best of our knowledge, this is the first work to address the significance of continually checking the calibrated extrinsic parameters for autonomous driving. The code is open-sourced on GitHub at https://github.com/OpenCalib/LiDAR2camera_self-check.
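The abstract describes the self-check only at a high level. The sketch below is a purely illustrative reading of it: rasterize the LiDAR points into a sparse depth map using the extrinsics under test, fuse that map with the RGB image, and let a small CNN output a logit for "well-calibrated". The projection helper, the network layout, and the idea of generating miscalibrated negatives by randomly perturbing the ground-truth KITTI extrinsics are all assumptions here, not the authors' implementation; the real code is in the linked repository.

```python
import numpy as np
import torch
import torch.nn as nn

def project_lidar_depth(points, T_cam_lidar, K, hw):
    """Project LiDAR points (N, 3) into the image under the extrinsics
    under test and rasterize them as a sparse depth map (assumed helper)."""
    h, w = hw
    pts_h = np.hstack([points, np.ones((len(points), 1))])    # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]                    # LiDAR -> camera frame
    cam = cam[cam[:, 2] > 0.1]                                # keep points in front of camera
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)                 # perspective divide
    ok = (0 <= uv[:, 0]) & (uv[:, 0] < w) & (0 <= uv[:, 1]) & (uv[:, 1] < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[uv[ok, 1], uv[ok, 0]] = cam[ok, 2]                  # splat depth values
    return depth

class CalibSelfCheck(nn.Module):
    """Binary classifier over the fused RGB + projected-depth tensor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # logit for "extrinsics are well-calibrated"

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W); depth: (B, H, W) from project_lidar_depth
        x = torch.cat([rgb, depth.unsqueeze(1)], dim=1)       # (B, 4, H, W)
        return self.head(self.features(x).flatten(1)).squeeze(-1)
```

Under this reading, training reduces to ordinary binary classification: positives are frames rendered with the true extrinsics, negatives are the same frames rendered with perturbed ones.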
Related papers
- CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network [11.602943913324653]
CalibFormer is an end-to-end network for automatic LiDAR-camera calibration.
We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations.
Our method achieved a mean translation error of $0.8751\,\mathrm{cm}$ and a mean rotation error of $0.0562^{\circ}$ on the KITTI dataset.
arXiv Detail & Related papers (2023-11-26T08:59:30Z)
- From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration [4.378156825150505]
We propose a target-free extrinsic calibration algorithm that requires no ground-truth training data.
We demonstrate our proposed improvement using the KITTI and KITTI-360 fisheye datasets.
arXiv Detail & Related papers (2023-11-03T13:30:31Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels [50.40632021583213]
We propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving at dangerous positions.
We train a deep learning model that takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to this predicted trajectory to determine the throttle and steering labels of the ego-vehicle; a toy controller sketch follows below.
arXiv Detail & Related papers (2023-08-02T20:46:43Z)
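The summary above does not specify the waypoint controller, so the snippet below is a toy, pure-pursuit-style sketch of how a predicted ego-frame trajectory could be turned into steering and throttle labels. All names and parameter values (wheelbase, lookahead distance, gains) are illustrative assumptions.

```python
import numpy as np

def waypoint_controller(trajectory, wheelbase=2.7, lookahead=5.0,
                        target_speed=8.0, speed=6.0):
    """Toy pure-pursuit-style controller: turn a predicted ego-frame
    trajectory (N, 2) of (x forward, y left) waypoints into labels."""
    dists = np.linalg.norm(trajectory, axis=1)
    idx = int(np.argmin(np.abs(dists - lookahead)))   # waypoint nearest the lookahead circle
    x, y = trajectory[idx]
    curvature = 2.0 * y / max(x**2 + y**2, 1e-6)      # pure-pursuit curvature
    steering = np.arctan(wheelbase * curvature)       # bicycle-model steering angle (rad)
    throttle = np.clip((target_speed - speed) * 0.3, 0.0, 1.0)  # proportional speed control
    return steering, throttle

# Example: a gentle left curve predicted by the trajectory network
xs = np.linspace(0.0, 10.0, 20)
traj = np.stack([xs, 0.02 * xs**2], axis=1)
print(waypoint_controller(traj))
```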
- Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR [7.906477322731106]
An accurate extrinsic calibration is required to fuse the camera and LiDAR data into the common spatial reference frame that high-level perception functions require.
There is a need for continuous online extrinsic calibration algorithms that can automatically update the camera-LiDAR calibration during the life of the vehicle using only sensor data.
We propose using the mutual information between the camera image's depth estimate, provided by commonly available monocular depth estimation networks, and the LiDAR point cloud's geometric distance as an optimization metric for extrinsic calibration; a sketch of such a score follows below.
arXiv Detail & Related papers (2023-06-22T23:16:31Z)
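As a rough illustration of the metric named above, the sketch below computes a histogram-based mutual information score between monocular depth sampled at the projected LiDAR pixels and the LiDAR depths themselves; a better extrinsic guess should align the two signals and raise the score, so the 6-DoF pose can be tuned by maximizing it (e.g. with a derivative-free search). The function names and the histogram estimator are assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information of two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)               # marginal of a
    py = pxy.sum(axis=0, keepdims=True)               # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def calibration_score(points, T_cam_lidar, K, mono_depth):
    """MI between monocular depth at projected LiDAR pixels and LiDAR depth."""
    h, w = mono_depth.shape
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]            # LiDAR -> camera frame
    cam = cam[cam[:, 2] > 0.1]                        # keep points in front
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)         # perspective divide
    ok = (0 <= uv[:, 0]) & (uv[:, 0] < w) & (0 <= uv[:, 1]) & (uv[:, 1] < h)
    return mutual_information(mono_depth[uv[ok, 1], uv[ok, 0]], cam[ok, 2])
```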
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo Camera for Automated Driving Vehicles with Uncertainty Analysis [4.616329048951671]
TEScalib is a novel extrinsic self-calibration approach of LiDAR and stereo camera.
It uses the geometric and photometric information of surrounding environments without any calibration targets for automated driving vehicles.
Our approach, evaluated on the KITTI dataset, achieves very promising results.
arXiv Detail & Related papers (2022-02-28T15:04:00Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components leads to effective camera-LiDAR fusion.
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass; a sketch of an evidential head follows below.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
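The summary names Hybrid Evidential Fusion without detail; the snippet below is a minimal sketch of standard deep evidential regression, the usual way to obtain both aleatoric and epistemic uncertainty from a single forward pass, and not the paper's actual fusion module. The class name and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Sketch of an evidential regression head: one forward pass yields a
    prediction plus uncertainty via the parameters (gamma, nu, alpha, beta)
    of a Normal-Inverse-Gamma distribution."""
    def __init__(self, in_dim, out_dim=1):
        super().__init__()
        self.fc = nn.Linear(in_dim, 4 * out_dim)

    def forward(self, x):
        gamma, nu, alpha, beta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(nu)                      # nu > 0
        alpha = F.softplus(alpha) + 1.0          # alpha > 1 keeps variances finite
        beta = F.softplus(beta)                  # beta > 0
        aleatoric = beta / (alpha - 1.0)         # expected data noise
        epistemic = beta / (nu * (alpha - 1.0))  # model (epistemic) uncertainty
        return gamma, aleatoric, epistemic
```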
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs and monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.