eKalibr-Inertial: Continuous-Time Spatiotemporal Calibration for Event-Based Visual-Inertial Systems
- URL: http://arxiv.org/abs/2509.05923v1
- Date: Sun, 07 Sep 2025 04:44:56 GMT
- Title: eKalibr-Inertial: Continuous-Time Spatiotemporal Calibration for Event-Based Visual-Inertial Systems
- Authors: Shuolong Chen, Xingxing Li, Liu Yuan
- Abstract summary: In ego-motion estimation, the visual-inertial setup is commonly adopted due to complementary characteristics between sensors. eKalibr-Inertial is an accurate spatiotemporal calibrator for event-based visual-inertial systems.
- Score: 7.192326181604602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The bioinspired event camera, distinguished by its exceptional temporal resolution, high dynamic range, and low power consumption, has been extensively studied in recent years for motion estimation, robotic perception, and object detection. In ego-motion estimation, the visual-inertial setup is commonly adopted due to the complementary characteristics of the sensors (e.g., scale perception and low drift). Optimal event-based visual-inertial fusion requires accurate spatiotemporal (extrinsic and temporal) calibration. In this work, we present eKalibr-Inertial, an accurate spatiotemporal calibrator for event-based visual-inertial systems that utilizes the widely used circle grid board. Building upon the grid pattern recognition and tracking methods in eKalibr and eKalibr-Stereo, the proposed method starts with a rigorous and efficient initialization in which all parameters of the estimator are accurately recovered. Subsequently, a continuous-time batch optimization refines the initialized parameters toward better states. Extensive real-world experiments show that eKalibr-Inertial achieves accurate event-based visual-inertial spatiotemporal calibration. The implementation of eKalibr-Inertial is open-sourced at https://github.com/Unsigned-Long/eKalibr to benefit the research community.
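To make the calibration formulation concrete, the following minimal Python sketch (not the eKalibr-Inertial implementation; the toy piecewise-linear trajectory, the translation-only camera model, and all function names are assumptions for illustration) shows the core idea of continuous-time spatiotemporal calibration: because a continuous-time trajectory can be queried at any timestamp, sampling the pose at t_cam + t_d turns the time offset t_d into an ordinary parameter of a batch reprojection residual, alongside the extrinsics.

```python
# Minimal sketch (assumed names; not the eKalibr-Inertial implementation).
# A continuous-time trajectory can be sampled at arbitrary timestamps, so the
# camera-IMU time offset t_d becomes an ordinary optimization parameter: each
# camera residual simply queries the trajectory at (t_cam + t_d).
import numpy as np

def sample_position(ctrl_pts, knots, t):
    """Toy piecewise-linear trajectory standing in for an SE(3) B-spline.
    ctrl_pts: (N, 3) control positions; knots: (N,) increasing timestamps."""
    i = int(np.clip(np.searchsorted(knots, t) - 1, 0, len(knots) - 2))
    a = (t - knots[i]) / (knots[i + 1] - knots[i])
    return (1.0 - a) * ctrl_pts[i] + a * ctrl_pts[i + 1]

def reprojection_residual(ctrl_pts, knots, t_cam, t_d, p_board, uv_obs, K):
    """Residual for one observed circle-grid center (translation-only toy
    camera; a real estimator also includes rotation and the extrinsics)."""
    p_wc = sample_position(ctrl_pts, knots, t_cam + t_d)  # pose at shifted time
    p_c = p_board - p_wc                  # grid point in the camera frame
    uv = (K @ (p_c / p_c[2]))[:2]         # pinhole projection
    return uv - uv_obs

# Evaluate the residual for a candidate time offset on synthetic data.
knots = np.linspace(0.0, 1.0, 5)
ctrl_pts = np.stack([np.linspace(0.0, 0.4, 5), np.zeros(5), np.zeros(5)], axis=1)
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
r = reprojection_residual(ctrl_pts, knots, t_cam=0.5, t_d=0.01,
                          p_board=np.array([0.2, 0.1, 2.0]),
                          uv_obs=np.array([310.0, 255.0]), K=K)
print(r)  # a solver would minimize such residuals over t_d, extrinsics, ...
```

In a full estimator, the toy interpolation would be a B-spline on SE(3), the residual would include the camera-IMU extrinsics and IMU measurement terms, and all parameters would be refined jointly by a nonlinear least-squares solver.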
Related papers
- Video Depth Propagation [54.523028170425256]
Existing methods rely on simple frame-by-frame monocular models, leading to temporal inconsistencies and inaccuracies. We propose VeloDepth, which effectively leverages an online video pipeline and performs deep feature propagation. Our design structurally enforces temporal consistency, resulting in stable depth predictions across consecutive frames with improved efficiency.
arXiv Detail & Related papers (2025-12-11T15:08:37Z) - Temporal and Rotational Calibration for Event-Centric Multi-Sensor Systems [24.110040599070796]
Event cameras generate asynchronous signals in response to pixel-level brightness changes. We propose a motion-based temporal and rotational calibration framework tailored for event-centric multi-sensor systems (a minimal sketch of the motion-based idea appears after this related-papers list).
arXiv Detail & Related papers (2025-08-18T01:53:27Z) - Inference-Time Gaze Refinement for Micro-Expression Recognition: Enhancing Event-Based Eye Tracking with Motion-Aware Post-Processing [2.5465367830324905]
Event-based eye tracking holds significant promise for fine-grained cognitive state inference. We introduce a model-agnostic, inference-time refinement framework to enhance the output of existing event-based gaze estimation models.
arXiv Detail & Related papers (2025-06-14T14:48:11Z) - EMoTive: Event-guided Trajectory Modeling for 3D Motion Estimation [59.33052312107478]
Event cameras offer possibilities for 3D motion estimation through continuous adaptive pixel-level responses to scene changes. This paper presents EMoTive, a novel event-based framework that models non-uniform trajectories via event-guided parametric curves. For motion representation, we introduce a density-aware adaptation mechanism to fuse spatial and temporal features under event guidance. The final 3D motion estimation is achieved through multi-temporal sampling of parametric trajectories, flows and depth motion fields.
arXiv Detail & Related papers (2025-03-14T13:15:54Z) - eKalibr: Dynamic Intrinsic Calibration for Event Cameras From First Principles of Events [1.237454174824584]
We propose an intrinsic calibration method for event cameras, named eKalibr. eKalibr builds upon a carefully designed event-based circle grid pattern recognition algorithm. We conduct experiments to evaluate the performance of eKalibr in terms of pattern extraction and intrinsic calibration.
arXiv Detail & Related papers (2025-01-10T03:41:03Z) - Universal Online Temporal Calibration for Optimization-based Visual-Inertial Navigation Systems [13.416013522770905]
We propose a universal online temporal calibration strategy for optimization-based visual-inertial navigation systems. We use the time offset td as a state parameter in the optimization residual model to align the IMU state to the corresponding image timestamp. Our approach provides more accurate time offset estimation and faster convergence, particularly in the presence of noisy sensor data.
arXiv Detail & Related papers (2025-01-03T12:41:25Z) - MATE: Motion-Augmented Temporal Consistency for Event-based Point Tracking [58.719310295870024]
This paper presents an event-based framework for tracking any point. To resolve ambiguities caused by event sparsity, a motion-guidance module incorporates kinematic vectors into the local matching process. The method improves the $Survival_{50}$ metric by 17.9% over the event-only tracking-any-point baseline.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
Such uses require accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z) - Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping [58.50342759993186]
We propose a visuo-tactile model for real-time estimation of the liquid inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is well controlled and adjusted based on the estimation model in real time.
arXiv Detail & Related papers (2022-02-23T13:38:31Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach can be up to 10x faster than the state of the art on public benchmarks, with comparable accuracy.
arXiv Detail & Related papers (2020-11-02T18:02:26Z) - Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
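As a concrete companion to the motion-based temporal and rotational calibration entry above, here is a minimal Python sketch of a widely used variant of that idea (assumed names and synthetic data; not necessarily the paper's exact method): the time offset between two sensors is recovered by maximizing the cross-correlation of their angular-velocity magnitudes, which are comparable across sensors regardless of their unknown relative rotation.

```python
# Minimal sketch of motion-based temporal calibration (a common approach,
# not necessarily the exact method of the paper above). Angular-velocity
# magnitudes are invariant to the unknown relative rotation, so correlating
# them over candidate shifts recovers the inter-sensor time offset.
import numpy as np

def temporal_offset(t, w_a, w_b, max_shift_s=0.1, step_s=0.001):
    """Return the offset s (seconds) by which sensor B lags sensor A,
    found by maximizing the correlation of |w_a(t)| with |w_b(t + s)|."""
    shifts = np.arange(-max_shift_s, max_shift_s + step_s, step_s)
    a = np.linalg.norm(w_a, axis=1)
    b = np.linalg.norm(w_b, axis=1)
    def score(s):
        b_shifted = np.interp(t + s, t, b)  # evaluate |w_b| at times t + s
        return np.corrcoef(a, b_shifted)[0, 1]
    return shifts[int(np.argmax([score(s) for s in shifts]))]

# Synthetic check: sensor B is sensor A delayed by 20 ms.
t = np.arange(0.0, 10.0, 0.005)
w_a = np.stack([np.sin(2 * np.pi * 0.7 * t),
                np.cos(2 * np.pi * 1.3 * t),
                0.3 * np.sin(2 * np.pi * 2.1 * t)], axis=1)
w_b = np.stack([np.interp(t - 0.02, t, w_a[:, k]) for k in range(3)], axis=1)
print(temporal_offset(t, w_a, w_b))  # ~0.02
```

Once the offset is known, the relative rotation can then be estimated by aligning the time-shifted angular-velocity vectors, e.g., with a least-squares or hand-eye formulation.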