PseudoCal: Towards Initialisation-Free Deep Learning-Based Camera-LiDAR
Self-Calibration
- URL: http://arxiv.org/abs/2309.09855v1
- Date: Mon, 18 Sep 2023 15:15:35 GMT
- Title: PseudoCal: Towards Initialisation-Free Deep Learning-Based Camera-LiDAR
Self-Calibration
- Authors: Mathieu Cocheteux, Julien Moreau, Franck Davoine
- Abstract summary: Camera-LiDAR extrinsic calibration is a critical task for multi-sensor fusion in autonomous systems.
Existing techniques often require manual intervention or specific environments, making them labour-intensive and error-prone.
We present PseudoCal, a novel self-calibration method that overcomes these limitations by leveraging the pseudo-LiDAR concept and working directly in the 3D space.
- Score: 5.263910852465186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Camera-LiDAR extrinsic calibration is a critical task for multi-sensor fusion
in autonomous systems, such as self-driving vehicles and mobile robots.
Traditional techniques often require manual intervention or specific
environments, making them labour-intensive and error-prone. Existing deep
learning-based self-calibration methods focus on small realignments and still
rely on initial estimates, limiting their practicality. In this paper, we
present PseudoCal, a novel self-calibration method that overcomes these
limitations by leveraging the pseudo-LiDAR concept and working directly in the
3D space instead of limiting itself to the camera field of view. In typical
autonomous vehicle and robotics contexts and conventions, PseudoCal is able to
perform one-shot calibration quasi-independently of initial parameter
estimates, addressing extreme cases that remain unsolved by existing
approaches.
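The pseudo-LiDAR concept leveraged by the abstract back-projects a dense depth map predicted from the camera image into a 3D point cloud, so that camera-LiDAR calibration can be treated as a problem in 3D space rather than in the image plane. A minimal sketch of that back-projection step, not taken from the paper (the function name and the simple pinhole model are illustrative assumptions):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K):
    """Back-project a dense depth map into a 3D point cloud
    (a 'pseudo-LiDAR' representation) using the pinhole camera
    intrinsic matrix K. Points are expressed in the camera frame."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

With a point cloud in hand, the extrinsic transform to the real LiDAR can then be estimated by 3D registration instead of by matching features inside the camera field of view.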
Related papers
- DST-Calib: A Dual-Path, Self-Supervised, Target-Free LiDAR-Camera Extrinsic Calibration Network [57.22935789233992]
This article presents the first self-supervised LiDAR-camera extrinsic calibration network that operates in an online fashion. The proposed method significantly outperforms existing approaches in terms of generalizability.
arXiv Detail & Related papers (2026-01-03T13:57:01Z)
- Tensor-Based Self-Calibration of Cameras via the TrifocalCalib Method [0.24998872534482344]
Estimating camera intrinsic parameters without prior scene knowledge is a fundamental challenge in computer vision. We present a set of equations based on the calibrated trifocal tensor, enabling projective camera self-calibration from minimal image data.
arXiv Detail & Related papers (2025-09-22T11:31:57Z)
- ARC-Calib: Autonomous Markerless Camera-to-Robot Calibration via Exploratory Robot Motions [15.004750210002152]
ARC-Calib is a model-based markerless camera-to-robot calibration framework.
It is fully autonomous and generalizable across diverse robots.
arXiv Detail & Related papers (2025-03-18T20:03:32Z)
- CalibRefine: Deep Learning-Based Online Automatic Targetless LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement [5.069968819561576]
CalibRefine is a fully automatic, targetless, and online calibration framework.
We show that CalibRefine delivers high-precision calibration results with minimal human involvement.
Our findings highlight how robust object-level feature matching, together with iterative and self-supervised attention-based adjustments, enables consistent sensor fusion in complex, real-world conditions.
arXiv Detail & Related papers (2025-02-24T20:53:42Z)
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
- P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on the 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness that can contribute to the practical applications relying on high-quality Camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z)
- Self-Supervised Online Camera Calibration for Automated Driving and Parking Applications [1.6921067573076216]
This paper proposes a framework to learn intrinsic and extrinsic calibration of the camera in real time.
The framework is self-supervised and doesn't require any labelling or supervision to learn the calibration parameters.
arXiv Detail & Related papers (2023-08-16T16:49:50Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Automated Static Camera Calibration with Intelligent Vehicles [58.908194559319405]
We present a robust calibration method for automated geo-referenced camera calibration.
Our method requires a calibration vehicle equipped with a combined filtering/RTK receiver and an inertial measurement unit (IMU) for self-localization.
Our method does not require any human interaction with the information recorded by both the infrastructure and the vehicle.
arXiv Detail & Related papers (2023-04-21T08:50:52Z)
- SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving [10.517099201352414]
SceneCalib is a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor.
We resolve issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds.
arXiv Detail & Related papers (2023-04-11T23:02:16Z)
- Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and/or highly-accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z)
- TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo Camera for Automated Driving Vehicles with Uncertainty Analysis [4.616329048951671]
TEScalib is a novel extrinsic self-calibration approach of LiDAR and stereo camera.
It uses the geometric and photometric information of surrounding environments without any calibration targets for automated driving vehicles.
Our approach evaluated on the KITTI dataset achieves very promising results.
arXiv Detail & Related papers (2022-02-28T15:04:00Z)
- Estimating Egocentric 3D Human Pose in Global Space [70.7272154474722]
We present a new method for egocentric global 3D body pose estimation using a single head-mounted fisheye camera.
Our approach outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-04-27T20:01:57Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameter for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects such as road lanes and poles in both image and point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
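Many of the target-less methods listed above ultimately validate an estimated extrinsic by projecting LiDAR points into the camera image and checking alignment with image features. A minimal sketch of that shared projection step, not taken from any of the papers (the function name and conventions are illustrative assumptions):

```python
import numpy as np

def project_lidar_to_image(pts_lidar, T_cam_lidar, K):
    """Project LiDAR points into the image plane given a 4x4
    extrinsic transform T_cam_lidar (LiDAR frame -> camera frame)
    and a 3x3 intrinsic matrix K. Returns pixel coordinates and a
    mask marking points in front of the camera."""
    n = pts_lidar.shape[0]
    pts_h = np.hstack([pts_lidar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]       # into camera frame
    in_front = pts_cam[:, 2] > 0                     # positive depth only
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective divide
    return uv, in_front
```

Methods such as P2O-Calib or CRLF score a candidate extrinsic by how well the projected points line up with edges or line features in the image; the projection itself is the same in each case.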
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.