LiREC-Net: A Target-Free and Learning-Based Network for LiDAR, RGB, and Event Calibration
- URL: http://arxiv.org/abs/2602.21754v1
- Date: Wed, 25 Feb 2026 10:08:14 GMT
- Title: LiREC-Net: A Target-Free and Learning-Based Network for LiDAR, RGB, and Event Calibration
- Authors: Aditya Ranjan Dash, Ramy Battrawy, René Schuster, Didier Stricker
- Abstract summary: LiREC-Net is a target-free, learning-based calibration network that jointly calibrates multiple sensor modality pairs. We introduce a shared LiDAR representation that leverages features from both its 3D nature and projected depth map. Our LiREC-Net achieves performance competitive with bi-modal models and sets a new strong baseline for the tri-modal use case.
- Score: 18.479441935331156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced autonomous systems rely on multi-sensor fusion for safer and more robust perception. To enable effective fusion, calibrating directly from natural driving scenes (i.e., target-free) with high accuracy is crucial for precise multi-sensor alignment. Existing learning-based calibration methods are typically designed for only a single pair of sensor modalities (i.e., a bi-modal setup). Unlike these methods, we propose LiREC-Net, a target-free, learning-based calibration network that jointly calibrates multiple sensor modality pairs, including LiDAR, RGB, and event data, within a unified framework. To reduce redundant computation and improve efficiency, we introduce a shared LiDAR representation that leverages features from both its 3D nature and projected depth map, ensuring better consistency across modalities. Trained and evaluated on established datasets, such as KITTI and DSEC, our LiREC-Net achieves performance competitive with bi-modal models and sets a new strong baseline for the tri-modal use case.
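The projected depth map mentioned in the abstract is a standard construction: LiDAR points are transformed into the camera frame with the extrinsic matrix and projected with the camera intrinsics to form a sparse depth image. The paper does not provide its implementation; the sketch below is a minimal, generic version with hypothetical intrinsics (KITTI-like values) and an identity extrinsic, purely for illustration.

```python
import numpy as np

# Hypothetical parameters for illustration (not from the paper).
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])  # camera intrinsic matrix
T = np.eye(4)                    # LiDAR-to-camera extrinsic (identity here)

def lidar_to_depth_map(points, K, T, h, w):
    """Project LiDAR points (N, 3) into a sparse depth map of shape (h, w)."""
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    cam = cam[cam[:, 2] > 0]
    # Perspective projection with the intrinsic matrix.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    z = cam[:, 2]
    # Discard projections that fall outside the image bounds.
    mask = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    # Write farthest points first so nearer points overwrite them
    # when several points land on the same pixel.
    order = np.argsort(-z[mask])
    for ui, vi, zi in zip(u[mask][order], v[mask][order], z[mask][order]):
        depth[vi, ui] = zi
    return depth
```

Calibration networks such as LiREC-Net typically apply a perturbed extrinsic `T` here and learn to regress the correction that re-aligns the resulting depth map with the RGB (or event) image.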
Related papers
- DST-Calib: A Dual-Path, Self-Supervised, Target-Free LiDAR-Camera Extrinsic Calibration Network [57.22935789233992]
This article presents the first self-supervised LiDAR-camera extrinsic calibration network that operates in an online fashion. The proposed method significantly outperforms existing approaches in terms of generalizability.
arXiv Detail & Related papers (2026-01-03T13:57:01Z) - LiteFusion: Taming 3D Object Detectors from Vision-Based to Multi-Modal with Minimal Adaptation [23.72983078807998]
Current 3D object detectors rely on complex architectures and training strategies to achieve higher detection accuracy. These methods depend heavily on the LiDAR sensor and thus suffer large performance drops when LiDAR is absent. We introduce a novel multi-modal 3D detector, LiteFusion, which integrates complementary features from LiDAR points into image features within a quaternion space.
arXiv Detail & Related papers (2025-12-23T10:16:33Z) - CalibRefine: Deep Learning-Based Online Automatic Targetless LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement [7.736775961390864]
CalibRefine is a fully automatic, targetless, and online calibration framework. It directly processes raw LiDAR point clouds and camera images. Our results show that robust object-level feature matching, combined with iterative refinement and self-supervised attention-based refinement, enables reliable sensor alignment.
arXiv Detail & Related papers (2025-02-24T20:53:42Z) - What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-camera calibration. We identify the critical limitations of regression-based methods with the widely used data generation pipeline. We also investigate how the input data format and preprocessing operations impact network performance.
arXiv Detail & Related papers (2025-01-28T14:12:32Z) - Validation & Exploration of Multimodal Deep-Learning Camera-Lidar Calibration models [0.0]
This article presents a study that explores, evaluates, and implements deep learning architectures for the calibration of multi-modal sensor systems.
The focus is to leverage the use of sensor fusion to achieve dynamic, real-time alignment between 3D LiDAR and 2D Camera sensors.
arXiv Detail & Related papers (2024-09-20T11:03:49Z) - Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning. Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z) - End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.