Neural Implicit Event Generator for Motion Tracking
- URL: http://arxiv.org/abs/2111.03824v1
- Date: Sat, 6 Nov 2021 07:38:52 GMT
- Title: Neural Implicit Event Generator for Motion Tracking
- Authors: Mana Masuda, Yusuke Sekikawa, Ryo Fujii, Hideo Saito
- Abstract summary: We present a novel framework of motion tracking from event data using implicit expression.
Our framework uses a pre-trained event-generation network, the implicit event generator (IEG), and performs motion tracking by updating its state (position and velocity) based on the difference between the observed events and the events generated from the current state estimate.
We have confirmed that our framework works well in real-world environments in the presence of noise and background clutter.
- Score: 13.312655893024658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel framework of motion tracking from event data using
implicit expression. Our framework uses a pre-trained event-generation MLP, named the
implicit event generator (IEG), and performs motion tracking by updating its state
(position and velocity) based on the difference between the observed events and the
events generated from the current state estimate. The difference is computed
implicitly by the IEG. Unlike the conventional explicit approach, which
requires dense computation to evaluate the difference, our implicit approach
realizes efficient state updates directly from sparse event data. Our sparse
algorithm is especially suitable for mobile robotics applications where
computational resources and battery life are limited. To verify the
effectiveness of our method on real-world data, we applied it to an AR marker
tracking application. We have confirmed that our framework works well in
real-world environments in the presence of noise and background clutter.
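To make the state-update idea concrete, here is a minimal PyTorch-style sketch: a small MLP stands in for the pre-trained IEG, it is evaluated only on the sparse observed events, and the implicit residual it returns drives a gradient step on the (position, velocity) state. The network architecture, the names (ImplicitEventGenerator, track_step), and the squared-residual objective are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a toy IEG and a gradient-based state update from
# sparse events. Architecture, names, and the objective are assumptions.
import torch
import torch.nn as nn


class ImplicitEventGenerator(nn.Module):
    """Hypothetical MLP scoring how consistent events (x, y, t) are with a state."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # implicit residual per event
        )

    def forward(self, events: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # events: (N, 3) sparse (x, y, t) coordinates; state: (4,) = [px, py, vx, vy]
        state_rep = state.expand(events.shape[0], -1)
        return self.net(torch.cat([events, state_rep], dim=-1)).squeeze(-1)


def track_step(ieg: nn.Module, state: torch.Tensor,
               events: torch.Tensor, lr: float = 1e-2) -> torch.Tensor:
    """One update: evaluate the frozen, pre-trained IEG only on the observed
    sparse events and take a gradient step on the state."""
    state = state.clone().requires_grad_(True)
    residual = ieg(events, state).pow(2).mean()
    residual.backward()
    with torch.no_grad():
        state -= lr * state.grad
    return state.detach()


ieg = ImplicitEventGenerator().requires_grad_(False)  # weights assumed pre-trained
state = torch.zeros(4)                                # initial position and velocity
events = torch.rand(256, 3)                           # one batch of sparse events
state = track_step(ieg, state, events)                # repeat per incoming event batch
```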
Related papers
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Generalizing Event-Based Motion Deblurring in Real-World Scenarios [62.995994797897424]
Event-based motion deblurring has shown promising results by exploiting low-latency events.
We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur.
A two-stage self-supervised learning scheme is then developed to fit real-world data distribution.
arXiv Detail & Related papers (2023-08-11T04:27:29Z)
- Continuous-Time Gaussian Process Motion-Compensation for Event-vision Pattern Tracking with Distance Fields [4.168157981135697]
This work addresses the issue of motion compensation and pattern tracking in event camera data.
The proposed method decomposes the tracking problem into a local SE(2) motion-compensation step followed by a homography registration of small motion-compensated event batches.
Our open-source implementation performs high-accuracy motion compensation and produces high-quality tracks in real-world scenarios.
arXiv Detail & Related papers (2023-03-05T13:48:20Z)
- Event Transformer+. A multi-purpose solution for efficient event data processing [13.648678472312374]
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range.
Current methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms.
We propose Event Transformer+, that improves our seminal work EvT with a refined patch-based event representation.
arXiv Detail & Related papers (2022-11-22T12:28:37Z)
- ProgressiveMotionSeg: Mutually Reinforced Framework for Event-Based Motion Segmentation [101.19290845597918]
This paper presents a Motion Estimation (ME) module and an Event Denoising (ED) module jointly optimized in a mutually reinforced manner.
Taking temporal correlation as guidance, the ED module calculates the confidence that each event belongs to real activity events and transmits it to the ME module to update the energy function of motion segmentation for noise suppression.
arXiv Detail & Related papers (2022-03-22T13:40:26Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-time Systems [5.959466944163293]
This paper proposes a real-time, hybrid neuromorphic framework for object tracking and classification using event-based cameras.
Unlike traditional approaches that use event-by-event processing, this work uses a mixed frame-and-event approach to achieve energy savings with high performance.
arXiv Detail & Related papers (2020-07-21T07:11:27Z)