Lidar and Camera Self-Calibration using Cost Volume Network
- URL: http://arxiv.org/abs/2012.13901v1
- Date: Sun, 27 Dec 2020 09:41:45 GMT
- Title: Lidar and Camera Self-Calibration using Cost Volume Network
- Authors: Xudong Lv, Boya Wang, Dong Ye, Shuo Wang
- Abstract summary: Instead of regressing the parameters between LiDAR and camera directly, we predict the decalibrated deviation from initial calibration to the ground truth.
Our approach outperforms CNN-based state-of-the-art methods in terms of a mean absolute calibration error of 0.297cm in translation and 0.017deg in rotation with miscalibration magnitudes of up to 1.5m and 20deg.
- Score: 3.793450497896671
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel online self-calibration approach for Light
Detection and Ranging (LiDAR) and camera sensors. Compared to the previous
CNN-based methods that concatenate the feature maps of the RGB image and
decalibrated depth image, we exploit the cost volume inspired by the PWC-Net
for feature matching. Besides the smooth L1-Loss of the predicted extrinsic
calibration parameters, an additional point cloud loss is applied. Instead of
regressing the extrinsic parameters between LiDAR and camera directly, we predict
the decalibrated deviation from initial calibration to the ground truth. During
inference, the calibration error decreases further with the usage of iterative
refinement and the temporal filtering approach. The evaluation results on the
KITTI dataset illustrate that our approach outperforms CNN-based
state-of-the-art methods in terms of a mean absolute calibration error of
0.297cm in translation and 0.017deg in rotation with miscalibration
magnitudes of up to 1.5m and 20deg.
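As a rough sketch of the matching step described above, the snippet below builds a PWC-Net-style cost volume by correlating RGB features against projected-depth features over a local search window. The tensor shapes, the window radius, and the iterative-refinement helpers (`project_lidar`, `compose`, `model`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def cost_volume(rgb_feat, depth_feat, max_disp=4):
    """PWC-Net-style correlation between RGB and projected-depth features.

    rgb_feat, depth_feat: (B, C, H, W) feature maps from the two branches.
    Returns (B, (2*max_disp+1)**2, H, W): one correlation map per shift.
    """
    b, c, h, w = rgb_feat.shape
    # Pad W and H by max_disp so every shift in the search window is valid.
    padded = F.pad(depth_feat, [max_disp] * 4)
    channels = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            # Correlation = channel-wise dot product, normalised by C.
            channels.append((rgb_feat * shifted).mean(dim=1, keepdim=True))
    return torch.cat(channels, dim=1)

# Iterative refinement at inference (schematic only; helpers are assumed):
# calib = initial_guess
# for _ in range(num_iters):
#     depth_img = project_lidar(points, calib)  # re-project with current estimate
#     delta = model(rgb_img, depth_img)         # network predicts the deviation
#     calib = compose(calib, delta)             # apply the predicted correction
```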
Related papers
- YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems [0.5999777817331317]
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z) - Orthogonal Causal Calibration [55.28164682911196]
We prove generic upper bounds on the calibration error of any causal parameter estimate $\theta$ with respect to any loss $\ell$.
We use our bound to analyze the convergence of two sample splitting algorithms for causal calibration.
arXiv Detail & Related papers (2024-06-04T03:35:25Z)
- CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network [11.602943913324653]
CalibFormer is an end-to-end network for automatic LiDAR-camera calibration.
We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations.
Our method achieved a mean translation error of $0.8751\,\mathrm{cm}$ and a mean rotation error of $0.0562^{\circ}$ on the KITTI dataset.
arXiv Detail & Related papers (2023-11-26T08:59:30Z)
- EdgeCalib: Multi-Frame Weighted Edge Features for Automatic Targetless LiDAR-Camera Calibration [15.057994140880373]
We introduce an edge-based approach for automatic online calibration of LiDAR and cameras in real-world scenarios.
The edge features, which are prevalent in various environments, are aligned in both images and point clouds to determine the extrinsic parameters.
The results show a state-of-the-art rotation accuracy of 0.086deg and a translation accuracy of 0.977 cm, outperforming existing edge-based calibration methods in both precision and robustness.
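As a minimal illustration of edge-based alignment (not EdgeCalib's actual multi-frame weighting scheme), the sketch below scores a candidate extrinsic by projecting LiDAR edge points into an image edge map. The pinhole projection, intrinsics `K`, and the scoring rule are assumptions for illustration.

```python
import numpy as np

def edge_alignment_score(edge_points, edge_map, K, R, t):
    """Score how well LiDAR edge points align with an image edge map.

    edge_points: (N, 3) LiDAR points lying on depth/intensity edges.
    edge_map:    (H, W) float edge strength (e.g. a blurred Canny map).
    K: (3, 3) camera intrinsics; R, t: candidate extrinsic rotation/translation.
    """
    cam = edge_points @ R.T + t            # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]             # keep points in front of the camera
    pix = cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]         # perspective division
    h, w = edge_map.shape
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Higher score = projected LiDAR edges land on strong image edges.
    return edge_map[v[ok], u[ok]].mean() if ok.any() else 0.0
```

Blurring the edge map before scoring smooths the objective, so a local search over (R, t) behaves reasonably.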
arXiv Detail & Related papers (2023-10-25T13:27:56Z)
- End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles [0.0]
CaLiCa is an end-to-end self-calibration network for Lidar and pinhole cameras.
We achieve 0.154 deg and 0.059 m accuracy with a reprojection error of 0.028 pixels using single-pass inference.
arXiv Detail & Related papers (2023-04-24T19:44:23Z)
- An Adaptive Method for Camera Attribution under Complex Radial Distortion Corrections [77.34726150561087]
In-camera or out-camera software/firmware alters the supporting grid of the image, which hampers PRNU-based camera attribution.
Existing solutions attempt to invert or estimate the correction using radial transformations parameterized by a few variables in order to limit the computational load.
We propose an adaptive algorithm that, by dividing the image into concentric annuli, can handle sophisticated corrections like those applied out-camera by third-party software such as Adobe Lightroom, Photoshop, GIMP, and PT-Lens.
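A minimal sketch of the annulus decomposition this method is built on, assuming equally spaced radii around the image centre; the paper's adaptive radii selection and per-annulus PRNU test are not reproduced here.

```python
import numpy as np

def annulus_masks(h, w, num_annuli=8):
    """Split an image into concentric annuli around its centre.

    Radial distortion corrections are approximately constant within a thin
    annulus, so PRNU matching can be attempted annulus by annulus.
    """
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max() + 1e-6, num_annuli + 1)
    return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]
```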
arXiv Detail & Related papers (2023-02-28T08:44:00Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
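A hedged sketch of the reconstruct-then-calibrate idea: `reconstruct_frames` stands in for a neural image-reconstruction network (the paper's specific model is not reproduced here), after which standard OpenCV checkerboard calibration applies.

```python
import cv2
import numpy as np

def calibrate_from_events(event_windows, board_size=(9, 6), square=0.03):
    """Intrinsic calibration from events via reconstructed intensity frames.

    event_windows: iterable of event slices; reconstruct_frames() is an
    assumed helper wrapping a neural reconstruction network and should
    yield 8-bit grayscale frames.
    """
    # Planar checkerboard model points, scaled by the square size in metres.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square
    obj_pts, img_pts, image_size = [], [], None
    for frame in reconstruct_frames(event_windows):   # hypothetical helper
        image_size = frame.shape[:2][::-1]            # (width, height)
        found, corners = cv2.findChessboardCorners(frame, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Standard checkerboard calibration on the reconstructed frames.
    return cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
```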
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
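A back-of-the-envelope version of a binning-free KS-style calibration error: sort predictions by confidence and take the maximum gap between the running means of confidence and correctness. The paper's exact estimator (including its spline recalibration) may differ.

```python
import numpy as np

def ks_calibration_error(confidences, correct):
    """Binning-free KS-style calibration error.

    confidences: predicted probabilities; correct: 0/1 per prediction.
    The KS statistic is the maximum gap between the two cumulative curves.
    """
    order = np.argsort(confidences)
    conf = np.asarray(confidences, float)[order]
    acc = np.asarray(correct, float)[order]
    n = len(conf)
    cum_conf = np.cumsum(conf) / n   # cumulative predicted confidence
    cum_acc = np.cumsum(acc) / n     # cumulative empirical accuracy
    return np.abs(cum_conf - cum_acc).max()
```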
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
- SOIC: Semantic Online Initialization and Calibration for LiDAR and Camera [18.51029962714994]
This paper presents a novel semantic-based online calibration approach, SOIC, for LiDAR and camera sensors.
We evaluate the proposed method on the KITTI dataset using either ground-truth or predicted semantics.
arXiv Detail & Related papers (2020-03-09T17:02:31Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
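Focal loss itself is standard; the sketch below shows the binary form FL(p_t) = -(1 - p_t)^gamma * log(p_t), whose focusing term down-weights easy examples and thereby discourages the over-confidence that drives miscalibration.

```python
import torch

def focal_loss(logits, targets, gamma=2.0):
    """Binary focal loss: FL(p_t) = -(1 - p_t)**gamma * log(p_t).

    logits: raw model outputs; targets: 0/1 labels; gamma: focusing parameter
    (gamma = 0 recovers plain cross-entropy).
    """
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)  # prob of the true class
    return -((1 - p_t) ** gamma * torch.log(p_t.clamp_min(1e-8))).mean()
```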
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
- Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
arXiv Detail & Related papers (2020-01-19T13:49:15Z)
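Zero-DCE's curve family is easy to state: each iteration applies LE(x) = x + alpha * x * (1 - x) with a per-pixel alpha map predicted by DCE-Net. The sketch below applies the curves with the alpha maps treated as given inputs rather than network outputs.

```python
import numpy as np

def apply_light_enhancement_curves(image, alphas):
    """Apply Zero-DCE-style quadratic curves: LE(x) = x + alpha * x * (1 - x).

    image:  (H, W, 3) array in [0, 1].
    alphas: list of per-pixel curve maps in [-1, 1], one per iteration
            (in the paper these come from DCE-Net; here they are inputs).
    """
    x = image
    for alpha in alphas:
        x = x + alpha * x * (1.0 - x)   # higher-order curve via iteration
    return np.clip(x, 0.0, 1.0)
```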
This list is automatically generated from the titles and abstracts of the papers on this site.