Environment-Driven Online LiDAR-Camera Extrinsic Calibration
- URL: http://arxiv.org/abs/2502.00801v1
- Date: Sun, 02 Feb 2025 13:52:35 GMT
- Title: Environment-Driven Online LiDAR-Camera Extrinsic Calibration
- Authors: Zhiwei Huang, Jiaqi Li, Ping Zhong, Rui Fan
- Abstract summary: EdO-LCEC is the first environment-driven, online calibration approach that achieves human-like adaptability. Inspired by the human perceptual system, EdO-LCEC incorporates a generalizable scene discriminator to actively interpret environmental conditions. Our approach formulates the calibration process as a spatial-temporal joint optimization problem.
- Score: 8.096139287822997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR-camera extrinsic calibration (LCEC) is the core for data fusion in computer vision. Existing methods typically rely on customized calibration targets or fixed scene types, lacking the flexibility to handle variations in sensor data and environmental contexts. This paper introduces EdO-LCEC, the first environment-driven, online calibration approach that achieves human-like adaptability. Inspired by the human perceptual system, EdO-LCEC incorporates a generalizable scene discriminator to actively interpret environmental conditions, creating multiple virtual cameras that capture detailed spatial and textural information. To overcome cross-modal feature matching challenges between LiDAR and camera, we propose dual-path correspondence matching (DPCM), which leverages both structural and textural consistency to achieve reliable 3D-2D correspondences. Our approach formulates the calibration process as a spatial-temporal joint optimization problem, utilizing global constraints from multiple views and scenes to improve accuracy, particularly in sparse or partially overlapping sensor views. Extensive experiments on real-world datasets demonstrate that EdO-LCEC achieves state-of-the-art performance, providing reliable and precise calibration across diverse, challenging environments.
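The abstract frames calibration as a spatial-temporal joint optimization over 3D-2D correspondences pooled from multiple views and scenes. A minimal sketch of that formulation (the `views` layout, Huber loss, and all function names are illustrative assumptions, not the authors' released code):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def reprojection_residuals(params, views):
    """Stack pixel residuals over all views that share one LiDAR-camera extrinsic.

    params: [rx, ry, rz, tx, ty, tz] (axis-angle rotation + translation).
    views:  list of (K, pts3d, pts2d) tuples, one per virtual camera / scene,
            with K (3, 3), pts3d (N, 3) in the LiDAR frame, pts2d (N, 2) pixels.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    residuals = []
    for K, pts3d, pts2d in views:
        cam = pts3d @ R.T + t                 # LiDAR frame -> camera frame
        proj = cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
        residuals.append((proj - pts2d).ravel())
    return np.concatenate(residuals)


def calibrate(views, init=np.zeros(6)):
    # A robust loss down-weights outlier correspondences pooled across scenes.
    fit = least_squares(reprojection_residuals, init, args=(views,),
                        loss="huber", f_scale=2.0)
    return fit.x
```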
Related papers
- Targetless LiDAR-Camera Calibration with Anchored 3D Gaussians [21.057702337896995]
We present a targetless LiDAR-camera calibration method that jointly optimizes sensor poses and scene geometry from arbitrary scenes.
We validate our method through extensive experiments on two real-world autonomous driving datasets.
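Joint optimization of sensor poses and scene geometry can be pictured as gradient descent over a pose and a set of anchor points. The toy below uses synthetic point reprojection purely for illustration; the real method optimizes anchored 3D Gaussians with rendering losses:

```python
import torch


def rotvec_to_matrix(r):
    """Differentiable Rodrigues formula: axis-angle (3,) -> rotation matrix (3, 3)."""
    theta = torch.linalg.norm(r) + 1e-8
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K \
        + (1 - torch.cos(theta)) * (K @ K)


# Synthetic ground truth so the sketch runs end to end.
torch.manual_seed(0)
gt_pts = torch.randn(100, 3) + torch.tensor([0.0, 0.0, 5.0])  # scene in front of camera
gt_R = rotvec_to_matrix(torch.tensor([0.05, -0.02, 0.01]))
gt_t = torch.tensor([0.10, -0.05, 0.02])
cam = gt_pts @ gt_R.T + gt_t
target_2d = cam[:, :2] / cam[:, 2:3]                          # observed projections

# Jointly optimize the pose AND a noisy copy of the geometry.
rvec = torch.full((3,), 1e-3, requires_grad=True)   # nonzero init avoids norm singularity
tvec = torch.zeros(3, requires_grad=True)
anchors = (gt_pts + 0.05 * torch.randn(100, 3)).detach().requires_grad_(True)
opt = torch.optim.Adam([rvec, tvec, anchors], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    R = rotvec_to_matrix(rvec)
    pred = anchors @ R.T + tvec
    loss = ((pred[:, :2] / pred[:, 2:3].clamp(min=1e-3) - target_2d) ** 2).mean()
    loss.backward()
    opt.step()
```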
arXiv Detail & Related papers (2025-04-06T20:00:01Z)
- CalibRefine: Deep Learning-Based Online Automatic Targetless LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement [5.069968819561576]
CalibRefine is a fully automatic, targetless, and online calibration framework.
We show that CalibRefine delivers high-precision calibration results with minimal human involvement.
Our findings highlight how robust object-level feature matching, together with iterative and self-supervised attention-based adjustments, enables consistent sensor fusion in complex, real-world conditions.
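Object-level feature matching with iterative refinement could look like the sketch below: Hungarian assignment between projected LiDAR object centroids and camera detections, followed by a PnP update. The pixel gate and loop structure are assumptions, not CalibRefine's attention-based design:

```python
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment


def refine_extrinsics(K, lidar_centroids, cam_centers, rvec, tvec, iters=5):
    """Iterative object-level refinement sketch.

    lidar_centroids: (N, 3) float64 object centroids from LiDAR clustering.
    cam_centers:     (M, 2) float64 object centers from an image detector.
    rvec, tvec:      (3, 1) float64 extrinsic estimate (axis-angle, translation).
    """
    for _ in range(iters):
        proj, _ = cv2.projectPoints(lidar_centroids, rvec, tvec, K, None)
        proj = proj.reshape(-1, 2)
        # One-to-one assignment of projected LiDAR objects to camera detections.
        cost = np.linalg.norm(proj[:, None] - cam_centers[None, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)
        keep = cost[rows, cols] < 50.0          # gate implausible matches (pixels)
        if keep.sum() < 4:
            break
        ok, rvec, tvec = cv2.solvePnP(
            lidar_centroids[rows[keep]], cam_centers[cols[keep]], K, None,
            rvec=rvec, tvec=tvec, useExtrinsicGuess=True,
            flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            break
    return rvec, tvec
```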
arXiv Detail & Related papers (2025-02-24T20:53:42Z)
- What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-Camera calibration. We identify the critical limitations of regression-based methods with the widely used data generation pipeline. We also investigate how the input data format and preprocessing operations impact network performance.
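The data generation pipeline commonly used by regression-based methods decalibrates a ground-truth extrinsic with a random perturbation and asks the network to regress the correction. A hedged sketch of that pipeline (perturbation ranges are arbitrary choices):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def make_training_sample(T_gt, max_rot_deg=10.0, max_trans=0.25, rng=np.random):
    """Perturb a ground-truth 4x4 extrinsic to create one regression training pair.

    Returns (T_init, T_delta): the network sees data projected with T_init and
    must regress T_delta such that T_delta @ T_init = T_gt.
    """
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    dR = Rotation.from_rotvec(angle * axis).as_matrix()
    dt = rng.uniform(-max_trans, max_trans, size=3)

    T_noise = np.eye(4)
    T_noise[:3, :3] = dR
    T_noise[:3, 3] = dt
    T_init = T_noise @ T_gt                 # decalibrated extrinsic fed to the net
    T_delta = T_gt @ np.linalg.inv(T_init)  # label: correction back to ground truth
    return T_init, T_delta
```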
arXiv Detail & Related papers (2025-01-28T14:12:32Z)
- Online, Target-Free LiDAR-Camera Extrinsic Calibration via Cross-Modal Mask Matching [16.13886663417327]
We introduce a novel framework known as MIAS-LCEC, provide an open-source versatile calibration toolbox, and publish three real-world datasets.
The cornerstone of our framework and toolbox is the cross-modal mask matching (C3M) algorithm, developed based on a state-of-the-art (SoTA) large vision model (LVM).
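Cross-modal mask matching can be approximated as optimal one-to-one assignment on an IoU matrix between image instance masks and masks rasterized from projected LiDAR points. This sketch shows only that assignment step, not the full C3M algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_masks(img_masks, lidar_masks, iou_thresh=0.3):
    """Optimal one-to-one matching of boolean masks by IoU.

    img_masks, lidar_masks: lists of (H, W) boolean arrays, e.g. instance masks
    from a large vision model and masks rasterized from projected LiDAR points.
    """
    iou = np.zeros((len(img_masks), len(lidar_masks)))
    for i, a in enumerate(img_masks):
        for j, b in enumerate(lidar_masks):
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            iou[i, j] = inter / union if union else 0.0
    rows, cols = linear_sum_assignment(-iou)        # maximize total IoU
    return [(i, j) for i, j in zip(rows, cols) if iou[i, j] >= iou_thresh]
```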
arXiv Detail & Related papers (2024-04-28T06:25:56Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
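Differentiable rendering-based pose optimization can be illustrated with a toy soft renderer that splats projected 3D points as Gaussians and backpropagates an image loss into the pose. Everything below (point model, intrinsics, loss) is a self-contained assumption, not EasyHeC's renderer:

```python
import torch


def skew(r):
    """so(3) hat operator; torch.matrix_exp(skew(r)) is a differentiable rotation."""
    zero = torch.zeros((), dtype=r.dtype)
    return torch.stack([torch.stack([zero, -r[2], r[1]]),
                        torch.stack([r[2], zero, -r[0]]),
                        torch.stack([-r[1], r[0], zero])])


def render_soft_mask(pts3d, rvec, tvec, K, hw=(64, 64), sigma=1.5):
    """Splat projected points as isotropic Gaussians: a toy differentiable renderer."""
    R = torch.matrix_exp(skew(rvec))
    cam = pts3d @ R.T + tvec
    uv = (cam @ K.T)[:, :2] / cam[:, 2:3].clamp(min=1e-3)
    ys, xs = torch.meshgrid(torch.arange(hw[0], dtype=torch.float32),
                            torch.arange(hw[1], dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                    # (H, W, 2) in (x, y)
    d2 = ((grid[None] - uv[:, None, None]) ** 2).sum(-1)    # (N, H, W)
    return 1.0 - torch.prod(1.0 - torch.exp(-d2 / (2 * sigma ** 2)), dim=0)


# Gradient-based pose recovery against an observed silhouette
# (synthesized from a ground-truth pose so the sketch is self-contained).
torch.manual_seed(0)
pts = torch.randn(50, 3) * 0.3 + torch.tensor([0.0, 0.0, 2.0])
K = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
obs_mask = render_soft_mask(pts, torch.tensor([0.1, 0.0, 0.05]),
                            torch.tensor([0.05, -0.02, 0.0]), K).detach()

rvec = torch.zeros(3, requires_grad=True)
tvec = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([rvec, tvec], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = ((render_soft_mask(pts, rvec, tvec, K) - obs_mask) ** 2).mean()
    loss.backward()
    opt.step()
```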
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
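Once a scale-aware disparity or depth network is available, the scale transfer to an up-to-scale VO system can be pictured as a robust per-frame ratio. The median-ratio alignment below is a common stand-in, not VRVO's exact integration:

```python
import numpy as np


def align_scale(vo_depth, net_depth, eps=1e-6):
    """Median-ratio scale between up-to-scale VO depth and scale-aware network depth.

    vo_depth, net_depth: (H, W) depth maps for the same frame; the median ratio
    is robust to outliers and is a common stand-in for full scale optimization.
    """
    valid = (vo_depth > eps) & (net_depth > eps)
    return float(np.median(net_depth[valid] / vo_depth[valid]))


def rescale_trajectory(poses, scale):
    """Apply one global scale to the translation of each 4x4 camera-to-world pose."""
    out = [p.copy() for p in poses]
    for p in out:
        p[:3, 3] *= scale
    return out
```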
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road Scene [15.054452813705112]
CROON (automatiC multi-LiDAR CalibratiOn and Refinement method in rOad sceNe) is a two-stage method comprising a rough calibration stage followed by a refinement stage.
Results on real-world and simulated data sets demonstrate the reliability and accuracy of our method.
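The refinement stage of a rough-then-refine multi-LiDAR pipeline is commonly an ICP-style registration seeded by the rough result. A compact point-to-point sketch (the correspondence gating threshold is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree


def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, both (N, 3)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs


def icp_refine(source, target, R0, t0, iters=30, max_dist=1.0):
    """Point-to-point ICP starting from the rough calibration (R0, t0)."""
    tree = cKDTree(target)
    R, t = R0, t0
    for _ in range(iters):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)
        keep = dist < max_dist                  # reject far correspondences
        if keep.sum() < 3:
            break
        R, t = kabsch(source[keep], target[idx[keep]])
    return R, t
```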
arXiv Detail & Related papers (2022-03-07T07:36:31Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes inference more efficient.
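A separable 4D convolution can be factorized into a 2D convolution over each pixel's observation map followed by a 2D convolution over the image plane. A PyTorch sketch of that factorization (layer sizes are illustrative):

```python
import torch
import torch.nn as nn


class Separable4DConv(nn.Module):
    """Factorized 4D convolution over per-pixel observation maps.

    Input x: (B, C, H, W, P, Q) -- image dims (H, W), observation-map dims (P, Q).
    A full 4D kernel is replaced by a photometric 2D conv over (P, Q) followed by
    a spatial 2D conv over (H, W), cutting parameters from O(k^4) to O(2k^2).
    """

    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.photometric = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.spatial = nn.Conv2d(c_out, c_out, k, padding=k // 2)

    def forward(self, x):
        B, C, H, W, P, Q = x.shape
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B * H * W, C, P, Q)
        x = self.photometric(x)                               # conv over (P, Q)
        C2 = x.shape[1]
        x = x.reshape(B, H, W, C2, P, Q).permute(0, 4, 5, 3, 1, 2)
        x = x.reshape(B * P * Q, C2, H, W)
        x = self.spatial(x)                                   # conv over (H, W)
        return x.reshape(B, P, Q, C2, H, W).permute(0, 3, 4, 5, 1, 2)


# Example: y = Separable4DConv(8, 16)(torch.randn(2, 8, 16, 16, 10, 10))
```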
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameter for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects such as road lanes and poles in both image and point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
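A line-feature calibration cost can be sketched as follows: detect straight lines in the image, then score an extrinsic hypothesis by how close projected LiDAR points from lane/pole segments land to those lines, using a distance transform. Thresholds and Hough parameters are assumptions, not CRLF's settings:

```python
import cv2
import numpy as np


def line_alignment_cost(image, lidar_pts, rvec, tvec, K):
    """Score an extrinsic by distance of projected LiDAR line points to image lines.

    image:     (H, W) 8-bit grayscale image.
    lidar_pts: (N, 3) points on straight structures (lanes, poles) segmented
               from the point cloud; rvec/tvec/K are float arrays.
    """
    edges = cv2.Canny(image, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    mask = np.zeros(edges.shape, np.uint8)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(mask, (x1, y1), (x2, y2), 255, 2)
    # Distance transform gives per-pixel distance to the nearest line pixel.
    dist = cv2.distanceTransform(255 - mask, cv2.DIST_L2, 3)

    proj, _ = cv2.projectPoints(lidar_pts.astype(np.float64), rvec, tvec, K, None)
    uv = np.round(proj.reshape(-1, 2)).astype(int)
    h, w = dist.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return float(dist[uv[inside, 1], uv[inside, 0]].mean())
```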
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the extrinsic parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.