Tight Fusion of Events and Inertial Measurements for Direct Velocity
Estimation
- URL: http://arxiv.org/abs/2401.09296v1
- Date: Wed, 17 Jan 2024 15:56:57 GMT
- Title: Tight Fusion of Events and Inertial Measurements for Direct Velocity
Estimation
- Authors: Wanting Xu, Xin Peng and Laurent Kneip
- Abstract summary: We propose a novel solution to tight visual-inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera.
We demonstrate how velocity estimates in highly dynamic situations can be obtained over short time intervals.
Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation.
- Score: 20.002238735553792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional visual-inertial state estimation targets absolute camera poses
and spatial landmark locations while first-order kinematics are typically
resolved as an implicitly estimated sub-state. However, this poses a risk in
velocity-based control scenarios, as the quality of the kinematics estimate
depends on the stability of the absolute camera and landmark coordinate
estimates. To address this issue, we propose a novel solution to tight
visual-inertial fusion directly at the level of first-order kinematics by
employing a dynamic vision sensor instead of a normal camera. More
specifically, we leverage trifocal tensor geometry to establish an incidence
relation that directly depends on events and camera velocity, and demonstrate
how velocity estimates in highly dynamic situations can be obtained over short
time intervals. Noise and outliers are dealt with using a nested two-layer
RANSAC scheme. Additionally, smooth velocity signals are obtained from a tight
fusion with pre-integrated inertial signals using a sliding window optimizer.
Experiments on both simulated and real data demonstrate that the proposed tight
event-inertial fusion leads to continuous and reliable velocity estimation in
highly dynamic scenarios independently of absolute coordinates. Furthermore, in
extreme cases, it achieves more stable and more accurate estimation of
kinematics than traditional, point-position-based visual-inertial odometry.
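The abstract's nested two-layer RANSAC scheme can be illustrated with a hedged sketch. In the paper, each incidence relation derives from trifocal tensor geometry over events and is linear in the camera velocity; here generic linear constraints A v = b stand in for those relations, the inner layer fits a velocity hypothesis from minimal constraint sets within one event cluster, and a simplified consensus step stands in for the outer RANSAC layer that rejects outlier clusters. All function names and thresholds below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_velocity(A, b):
    """Least-squares velocity from stacked linear incidence constraints A v = b."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

def inner_ransac(A, b, n_min=6, iters=50, thresh=0.05, rng=None):
    """Inner layer: sample minimal constraint sets within one event cluster,
    keep the velocity hypothesis with the most inliers, then refit on them."""
    rng = rng or np.random.default_rng()
    best_v, best_in = None, np.zeros(len(b), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(b), size=n_min, replace=False)
        v = fit_velocity(A[idx], b[idx])
        inliers = np.abs(A @ v - b) < thresh
        if inliers.sum() > best_in.sum():
            best_v, best_in = v, inliers
    if best_in.sum() >= n_min:  # refine on the full inlier set
        best_v = fit_velocity(A[best_in], b[best_in])
    return best_v

def nested_ransac(clusters, agree_thresh=0.1, rng=None):
    """Outer layer: one velocity hypothesis per cluster, then a consensus
    step (standing in for the second RANSAC layer) rejects outlier clusters
    and averages the agreeing hypotheses."""
    rng = rng or np.random.default_rng()
    H = np.array([v for A, b in clusters
                  if (v := inner_ransac(A, b, rng=rng)) is not None])
    best_agree = np.zeros(len(H), dtype=bool)
    for i in range(len(H)):
        agree = np.linalg.norm(H - H[i], axis=1) < agree_thresh
        if agree.sum() > best_agree.sum():
            best_agree = agree
    return H[best_agree].mean(axis=0)
```

The two layers address different failure modes: the inner layer absorbs per-event noise and outliers inside a cluster, while the outer layer discards whole clusters whose events do not stem from the dominant camera motion.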
Related papers
- Event-based Visual Deformation Measurement [76.25283405575108]
Visual Deformation Measurement aims to recover dense deformation fields by tracking surface motion from camera observations.
Traditional image-based methods rely on minimal inter-frame motion to constrain the correspondence search space.
We propose an event-frame fusion framework that exploits events for temporally dense motion cues and frames for spatially dense, precise estimation.
arXiv Detail & Related papers (2026-02-16T01:04:48Z) - KineST: A Kinematics-guided Spatiotemporal State Space Model for Human Motion Tracking from Sparse Signals [11.14439818111551]
Full-body motion tracking plays an essential role in AR/VR applications, bridging physical and virtual interactions.
It is challenging to reconstruct realistic and diverse full-body poses from the sparse signals obtained by head-mounted displays.
Existing methods for pose reconstruction often incur high computational costs or model spatial and temporal dependencies separately.
We propose KineST, a novel kinematics-guided state space model, which effectively extracts geometric dependencies while integrating local and global pose perception.
arXiv Detail & Related papers (2025-12-18T17:25:47Z) - DeLiVR: Differential Spatiotemporal Lie Bias for Efficient Video Deraining [21.816338275013702]
We propose DeLiVR, an efficient video deraining method that injects Lie-group differential biases directly into the attention scores of the network.
A rotation-bounded Lie relative bias predicts the in-plane angle of each frame using a compact prediction module.
A differential group displacement computes angular differences between adjacent frames to estimate a velocity.
This bias combines temporal decay and attention masks to focus on inter-frame relationships while precisely matching the direction of rain streaks.
arXiv Detail & Related papers (2025-09-26T00:29:36Z) - Motion Segmentation and Egomotion Estimation from Event-Based Normal Flow [8.869407907066005]
This paper introduces a robust framework for motion segmentation and egomotion estimation using event-based normal flow.
Our approach exploits the sparse, high-temporal-resolution event data and incorporates geometric constraints between normal flow, scene structure, and inertial measurements.
arXiv Detail & Related papers (2025-07-19T06:11:09Z) - Planar Velocity Estimation for Fast-Moving Mobile Robots Using Event-Based Optical Flow [1.4447019135112429]
We introduce an approach to velocity estimation that is decoupled from wheel-to-surface traction assumptions.
The proposed method is evaluated through in-field experiments on a 1:10 scale autonomous racing platform.
arXiv Detail & Related papers (2025-05-16T11:00:33Z) - EMoTive: Event-guided Trajectory Modeling for 3D Motion Estimation [59.33052312107478]
Event cameras offer possibilities for 3D motion estimation through continuous adaptive pixel-level responses to scene changes.
This paper presents EMoTive, a novel event-based framework that models non-uniform trajectories via event-guided parametric curves.
For motion representation, we introduce a density-aware adaptation mechanism to fuse spatial and temporal features under event guidance.
The final 3D motion estimation is achieved through multi-temporal sampling of parametric trajectories, flows and depth motion fields.
arXiv Detail & Related papers (2025-03-14T13:15:54Z) - Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency [58.719310295870024]
This paper presents an event-based framework for tracking any point.
It tackles the challenges posed by spatial sparsity and motion sensitivity in events.
It achieves 150% faster processing with competitive model parameters.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - DATAP-SfM: Dynamic-Aware Tracking Any Point for Robust Structure from Motion in the Wild [85.03973683867797]
This paper proposes a concise, elegant, and robust pipeline to estimate smooth camera trajectories and obtain dense point clouds for casual videos in the wild.
We show that the proposed method achieves state-of-the-art performance in terms of camera pose estimation even in complex dynamic challenge scenes.
arXiv Detail & Related papers (2024-11-20T13:01:16Z) - EVIT: Event-based Visual-Inertial Tracking in Semi-Dense Maps Using Windowed Nonlinear Optimization [19.915476815328294]
Event cameras are an interesting visual exteroceptive sensor that reacts to brightness changes rather than integrating absolute image intensities.
This paper proposes the addition of inertial signals in order to robustify the estimation.
Our evaluation focuses on a diverse set of real world sequences and comprises a comparison of our proposed method against a purely event-based alternative running at different rates.
arXiv Detail & Related papers (2024-08-02T16:24:55Z) - Event-Aided Time-to-Collision Estimation for Autonomous Driving [28.13397992839372]
We present a novel method that estimates the time to collision using a neuromorphic event-based camera.
The proposed algorithm consists of a two-step approach for efficient and accurate geometric model fitting on event data.
Experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-07-10T02:37:36Z) - Event-Based Visual Odometry on Non-Holonomic Ground Vehicles [20.847519645153337]
Event-based visual odometry is shown to be reliable and robust in challenging illumination scenarios.
Our algorithm achieves accurate estimates of the vehicle's rotational velocity and thus results that are comparable to the delta rotations obtained by frame-based sensors under normal conditions.
arXiv Detail & Related papers (2024-01-17T16:52:20Z) - A 5-Point Minimal Solver for Event Camera Relative Motion Estimation [47.45081895021988]
We introduce a novel minimal 5-point solver that estimates line parameters and linear camera velocity projections, which can be fused into a single, averaged linear velocity when considering multiple lines.
Our method consistently achieves a 100% success rate in estimating linear velocity where existing closed-form solvers only achieve between 23% and 70%.
arXiv Detail & Related papers (2023-09-29T08:30:18Z) - Correlating sparse sensing for large-scale traffic speed estimation: A
Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z) - ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving
Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO)
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - Continuous Event-Line Constraint for Closed-Form Velocity Initialization [0.0]
Event cameras trigger events asynchronously and independently upon a sufficient change of the logarithmic brightness level.
We propose the continuous event-line constraint, which relies on a constant-velocity motion assumption as well as trifocal geometry in order to express a relationship between line observations given by event clusters as well as first-order camera dynamics.
arXiv Detail & Related papers (2021-09-09T14:39:56Z) - End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.