Continuous-Time Gaussian Process Motion-Compensation for Event-vision Pattern Tracking with Distance Fields
- URL: http://arxiv.org/abs/2303.02672v1
- Date: Sun, 5 Mar 2023 13:48:20 GMT
- Title: Continuous-Time Gaussian Process Motion-Compensation for Event-vision Pattern Tracking with Distance Fields
- Authors: Cedric Le Gentil, Ignacio Alzugaray, Teresa Vidal-Calleja
- Abstract summary: This work addresses the issue of motion compensation and pattern tracking in event camera data.
The proposed method decomposes the tracking problem into a local SE(2) motion-compensation step followed by a homography registration of small motion-compensated event batches.
Our open-source implementation performs high-accuracy motion compensation and produces high-quality tracks in real-world scenarios.
- Score: 4.168157981135697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses the issue of motion compensation and pattern tracking in
event camera data. An event camera generates asynchronous streams of events
triggered independently by each of the pixels upon changes in the observed
intensity. While such unconventional data offer great advantages in low-light and
rapid-motion scenarios, they present significant research challenges, as traditional
vision algorithms are not directly applicable to this sensing modality. The
proposed method decomposes the tracking problem into a local SE(2)
motion-compensation step followed by a homography registration of small
motion-compensated event batches. The first component relies on Gaussian
Process (GP) theory to model the continuous occupancy field of the events in
the image plane and embed the camera trajectory in the covariance kernel
function. The trajectory is thus estimated, analogously to GP hyperparameter
learning, by maximising the log marginal likelihood of the data.
The continuous occupancy fields are turned into distance fields and used as
templates for homography-based registration. By benchmarking the proposed
method against other state-of-the-art techniques, we show that our open-source
implementation performs high-accuracy motion compensation and produces
high-quality tracks in real-world scenarios.
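To make the first component concrete, here is a minimal sketch, assuming a constant-velocity SE(2) motion model and a plain squared-exponential kernel (the paper instead embeds the camera trajectory inside the kernel function): events are warped to a reference time by a candidate motion, and the candidate is scored by the GP log marginal likelihood of a unit "occupancy" observation at the warped locations, mirroring GP hyperparameter learning. All function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: score a candidate SE(2) motion by the GP log marginal
# likelihood of the motion-compensated events' occupancy (assumptions noted above).
import numpy as np
from scipy.optimize import minimize

def warp_se2(events, motion):
    """Warp events (t, x, y) to the batch's reference time with a constant
    SE(2) velocity (vx, vy, omega). Simplified, hypothetical motion model."""
    vx, vy, omega = motion
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    dt = t - t[0]
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    xw = c * x - s * y - vx * dt
    yw = s * x + c * y - vy * dt
    return np.stack([xw, yw], axis=1)

def rbf_kernel(a, b, lengthscale=2.0, variance=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def neg_log_marginal_likelihood(motion, events, noise=0.1):
    """Negative GP log marginal likelihood of a unit occupancy observation
    at the warped event locations (constant n/2*log(2*pi) term omitted)."""
    pts = warp_se2(events, motion)
    y = np.ones(len(pts))                               # occupancy targets
    K = rbf_kernel(pts, pts) + noise**2 * np.eye(len(pts))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# events: N x 3 array of (timestamp, x, y); small batches keep the O(N^3) GP tractable.
events = np.random.rand(200, 3) * [0.01, 240, 180]
result = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(events,),
                  method="Nelder-Mead")
print("estimated SE(2) velocity:", result.x)
```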
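Continuing under the same illustrative assumptions, the motion-compensated events can then be rasterised and converted into a Euclidean distance field, which the paper uses as the template for homography-based registration (the registration step itself is not sketched here).

```python
# Illustrative fragment: build a distance field from motion-compensated event locations.
import numpy as np
from scipy import ndimage

def event_distance_field(points, shape=(180, 240)):
    """Distance (in pixels) from every pixel to the nearest motion-compensated event."""
    img = np.zeros(shape, dtype=bool)
    xi = np.clip(points[:, 0].astype(int), 0, shape[1] - 1)
    yi = np.clip(points[:, 1].astype(int), 0, shape[0] - 1)
    img[yi, xi] = True                                  # mark event pixels
    return ndimage.distance_transform_edt(~img)         # 0 on events, growing elsewhere

# 'points' would be the warped (x, y) locations produced by the previous sketch.
points = np.random.rand(200, 2) * [240, 180]
dist_field = event_distance_field(points)
print(dist_field.shape, float(dist_field.max()))
```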
Related papers
- ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras [33.81592783496106]
Event-based visual odometry aims at solving tracking and mapping sub-problems in parallel.
We build an event-based stereo visual-inertial odometry system on top of our previous direct pipeline Event-based Stereo Visual Odometry.
arXiv Detail & Related papers (2024-10-12T05:35:27Z)
- GS-EVT: Cross-Modal Event Camera Tracking based on Gaussian Splatting [19.0745952177123]
This paper explores the use of event cameras for motion tracking.
It provides a solution with inherent robustness under difficult dynamics and illumination.
It tracks a map representation that comes directly from frame-based cameras.
arXiv Detail & Related papers (2024-09-28T03:56:39Z)
- Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation [34.529280562470746]
We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories.
Its effectiveness is demonstrated in two scenarios: in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model by 29%. (A minimal contrast-maximisation sketch appears after this related-papers list.)
arXiv Detail & Related papers (2024-07-15T15:18:28Z)
- IMU-Aided Event-based Stereo Visual Odometry [7.280676899773076]
We improve our previous direct pipeline, Event-based Stereo Visual Odometry, in terms of accuracy and efficiency.
To speed up the mapping operation, we propose an efficient strategy of edge-pixel sampling according to the local dynamics of events.
We release our pipeline as an open-source software for future research in this field.
arXiv Detail & Related papers (2024-05-07T07:19:25Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
This survey reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
It categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects observed with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
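As a concrete illustration of the Contrast Maximization framework mentioned in the Motion-prior Contrast Maximization entry above, here is a minimal sketch assuming a constant global 2D flow and nearest-pixel accumulation; the names and parameters are illustrative and do not come from that paper.

```python
# Illustrative Contrast Maximization sketch: warp events by a candidate flow, accumulate
# them into an image, and score the flow by the image variance (contrast), which peaks
# when events are well aligned along their true motion.
import numpy as np
from scipy.optimize import minimize

def image_of_warped_events(events, flow, shape=(180, 240)):
    """Accumulate events (t, x, y) warped by a constant 2D flow (pixels/second)."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    xw = x - flow[0] * (t - t[0])
    yw = y - flow[1] * (t - t[0])
    img = np.zeros(shape)
    xi = np.clip(np.round(xw).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(yw).astype(int), 0, shape[0] - 1)
    np.add.at(img, (yi, xi), 1.0)          # nearest-pixel voting; bilinear is common too
    return img

def neg_contrast(flow, events):
    return -np.var(image_of_warped_events(events, flow))

events = np.random.rand(500, 3) * [0.05, 240, 180]
best = minimize(neg_contrast, x0=np.zeros(2), args=(events,), method="Nelder-Mead")
print("estimated flow (px/s):", best.x)
```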
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.