Exploiting Event Cameras for Spatio-Temporal Prediction of Fast-Changing
Trajectories
- URL: http://arxiv.org/abs/2001.01248v2
- Date: Wed, 15 Jan 2020 13:13:02 GMT
- Title: Exploiting Event Cameras for Spatio-Temporal Prediction of Fast-Changing
Trajectories
- Authors: Marco Monforte, Ander Arriandiaga, Arren Glover and Chiara Bartolozzi
- Abstract summary: This paper investigates trajectory prediction for robotics, to improve the interaction of robots with moving targets.
We apply state-of-the-art machine learning, specifically Long Short-Term Memory (LSTM) architectures.
- Score: 7.13400854198045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates trajectory prediction for robotics, to improve the
interaction of robots with moving targets, such as catching a bouncing ball.
Unexpected, highly non-linear trajectories cannot easily be predicted with
regression-based fitting procedures; we therefore apply state-of-the-art
machine learning, specifically Long Short-Term Memory (LSTM) architectures.
In addition, fast-moving targets are better sensed using event
cameras, which produce an asynchronous output triggered by spatial change,
rather than at fixed temporal intervals as with traditional cameras. We
investigate how LSTM models can be adapted for event camera data, and in
particular look at the benefit of using asynchronously sampled data.
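As a rough sketch of this idea (not the authors' code), the PyTorch snippet below appends the inter-sample interval dt as an extra input feature, so an LSTM can account for the irregular timing of event-camera samples; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AsyncTrajectoryLSTM(nn.Module):
    """Sketch: LSTM over asynchronously sampled (x, y) positions.

    Event cameras do not sample at fixed intervals, so the time gap dt
    between consecutive samples is appended as an extra input feature,
    letting the network account for irregular sampling.
    """
    def __init__(self, hidden_size=64, horizon=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2 * horizon)  # future (x, y) points
        self.horizon = horizon

    def forward(self, xy, dt):
        # xy: (batch, seq, 2) positions; dt: (batch, seq, 1) inter-sample gaps
        features = torch.cat([xy, dt], dim=-1)
        out, _ = self.lstm(features)
        pred = self.head(out[:, -1])            # predict from last hidden state
        return pred.view(-1, self.horizon, 2)   # (batch, horizon, 2)

model = AsyncTrajectoryLSTM()
xy = torch.randn(8, 20, 2)                      # 20 past samples per trajectory
dt = torch.rand(8, 20, 1)                       # irregular sampling intervals
future = model(xy, dt)                          # (8, 10, 2)
```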
Related papers
- BlinkTrack: Feature Tracking over 100 FPS via Events and Images [50.98675227695814]
We propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking.
Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches.
Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods.
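For context, the snippet below sketches the classical predict/update cycle of a constant-velocity Kalman filter for a 2-D feature point, the textbook starting point that a learned, differentiable variant would extend; the dynamics and noise matrices here are generic assumptions, not BlinkTrack's.

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D feature point: state [x, y, vx, vy].
dt = 0.01
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])          # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])           # we observe position only
Q = 1e-4 * np.eye(4)                   # process noise (illustrative)
R = 1e-2 * np.eye(2)                   # measurement noise (illustrative)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a position measurement z = (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([0.5, 0.3]))
```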
arXiv Detail & Related papers (2024-09-26T15:54:18Z) - SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
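As background on the spiking approach (the paper's architecture is not given in this summary), a leaky integrate-and-fire neuron, the basic unit of most spiking networks, can be simulated in a few lines; all constants below are illustrative.

```python
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over an input sequence.

    The membrane potential decays by `leak` each step, accumulates input,
    and emits a spike (1) whenever it crosses `v_thresh`, then resets.
    """
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
events = rng.random(50) * 0.4          # sparse, event-driven input
print(lif_neuron(events))
```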
arXiv Detail & Related papers (2023-09-29T05:13:43Z) - EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
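The exact form of $\mathcal{L}_{EC}$ is defined in the paper; the sketch below shows only a generic normalized-temperature contrastive loss of the kind such event-specific losses build on, with hypothetical embedding inputs.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """Generic NT-Xent-style contrastive loss between two embedding views.

    z1[i] and z2[i] are embeddings of two views of the same clip and form
    the positive pair; all other rows in the batch act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
loss = contrastive_loss(z1, z2)
```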
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - Event Camera-based Visual Odometry for Dynamic Motion Tracking of a
Legged Robot Using Adaptive Time Surface [5.341864681049579]
Event cameras offer high temporal resolution and high dynamic range, which can eliminate the motion blur that affects RGB images during fast movements.
We introduce an adaptive time surface (ATS) method that addresses the whiteout and blackout issue in conventional time surfaces.
Lastly, we propose a nonlinear pose optimization formula that simultaneously performs 3D-2D alignment on both RGB-based and event-based maps and images.
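For reference, a conventional (non-adaptive) exponential time surface can be computed as below; the paper's ATS additionally adapts the decay constant to local event density to avoid whiteout/blackout, which this fixed-tau sketch does not do.

```python
import numpy as np

def time_surface(events, shape, t_now, tau=0.05):
    """Exponential time surface: each pixel stores exp(-(t_now - t_last)/tau).

    Pixels with recent events are close to 1; pixels with no events decay
    toward 0. An adaptive variant would vary tau per region.
    """
    t_last = np.full(shape, -np.inf)       # pixels with no events stay at 0
    for x, y, t in events:                 # events sorted by timestamp
        t_last[y, x] = t
    return np.exp(-(t_now - t_last) / tau)

events = [(10, 20, 0.01), (11, 20, 0.02), (10, 21, 0.04)]
surface = time_surface(events, shape=(64, 64), t_now=0.05)
```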
arXiv Detail & Related papers (2023-05-15T19:03:45Z) - Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and
Events [63.984927609545856]
An Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamics between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art results, with remarkable performance on event-based rolling-shutter-to-global-shutter (RS2GS) inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z) - Fast Trajectory End-Point Prediction with Event Cameras for Reactive
Robot Control [4.110120522045467]
In this paper, we propose to exploit the low latency, motion-driven sampling, and data compression properties of event cameras to overcome the latency of standard frame-based perception.
As a use-case, we use a Panda robotic arm to intercept a ball bouncing on a table.
We train the network in simulation to speed up the dataset acquisition and then fine-tune the models on real trajectories.
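A minimal version of this sim-to-real recipe, with a placeholder model and hypothetical data loaders (the paper's architecture and training schedule are not given in this summary), might look like:

```python
import torch
import torch.nn as nn

# Sketch: pretrain on cheap simulated trajectories, then fine-tune on
# scarce real ones at a lower learning rate. Model and loaders are
# placeholders, not the paper's.
model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.MSELoss()

def train(loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for past, endpoint in loader:       # (batch, 40) -> (batch, 2)
            opt.zero_grad()
            loss_fn(model(past), endpoint).backward()
            opt.step()

# train(sim_loader, lr=1e-3, epochs=50)     # pretrain in simulation
# train(real_loader, lr=1e-4, epochs=10)    # fine-tune on real trajectories
```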
arXiv Detail & Related papers (2023-02-27T14:14:52Z) - PUCK: Parallel Surface and Convolution-kernel Tracking for Event-Based
Cameras [4.110120522045467]
Event cameras can guarantee fast visual sensing in dynamic environments, but require a tracking algorithm that can keep up with the high data rate induced by the robot's ego-motion.
We introduce a novel tracking method that leverages the Exponential Reduced Ordinal Surface (EROS) data representation to decouple event-by-event processing and tracking.
We propose the task of tracking the air hockey puck sliding on a surface, with the future aim of controlling the iCub robot to reach the target precisely and on time.
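A sketch of an event-driven EROS-style update, following published descriptions of the representation (the window size and decay factor are illustrative assumptions):

```python
import numpy as np

def eros_update(surface, x, y, k=7, decay=0.7):
    """One event-driven EROS-style update (illustrative sketch).

    Pixels in the k x k window around the new event decay multiplicatively,
    then the event pixel is set to the maximum value, so old activity fades
    as a function of local event count rather than wall-clock time.
    """
    h, w = surface.shape
    r = k // 2
    y0, y1 = max(0, y - r), min(h, y + r + 1)
    x0, x1 = max(0, x - r), min(w, x + r + 1)
    surface[y0:y1, x0:x1] *= decay
    surface[y, x] = 1.0
    return surface

surface = np.zeros((64, 64))
for x, y in [(32, 32), (33, 32), (34, 33)]:    # a short event stream
    surface = eros_update(surface, x, y)
```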
arXiv Detail & Related papers (2022-05-16T13:23:52Z) - Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera trajectory generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - End-to-end Learning of Object Motion Estimation from Retinal Events for
Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay (TSLTD) representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
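A minimal sketch of building one synchronous TSLTD frame, assuming events arrive as (x, y, t) tuples and a fixed accumulation window; this illustrates linear time decay only and is not the authors' implementation:

```python
import numpy as np

def tsltd_frame(events, shape, t_start, t_end):
    """Synchronous Time-Surface with Linear Time Decay (sketch).

    Within the window [t_start, t_end], more recent events get values
    closer to 1, decaying linearly to 0 at the window start.
    """
    frame = np.zeros(shape)
    span = t_end - t_start
    for x, y, t in events:
        if t_start <= t <= t_end:
            frame[y, x] = (t - t_start) / span   # linear decay with age
    return frame

events = [(5, 5, 0.002), (6, 5, 0.006), (5, 6, 0.009)]
frame = tsltd_frame(events, shape=(32, 32), t_start=0.0, t_end=0.01)
```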
arXiv Detail & Related papers (2020-02-14T08:19:50Z)