Dense Continuous-Time Optical Flow from Events and Frames
- URL: http://arxiv.org/abs/2203.13674v2
- Date: Sun, 11 Feb 2024 15:26:43 GMT
- Title: Dense Continuous-Time Optical Flow from Events and Frames
- Authors: Mathias Gehrig and Manasi Muglikar and Davide Scaramuzza
- Abstract summary: We show that it is possible to compute per-pixel, continuous-time optical flow using events from an event camera.
We leverage these benefits to predict pixel trajectories densely in continuous time via parameterized Bézier curves.
Our model is the first method that can regress dense pixel trajectories from event data.
- Score: 27.1850072968441
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a method for estimating dense continuous-time optical flow from
event data. Traditional dense optical flow methods compute the pixel
displacement between two images. Due to missing information, these approaches
cannot recover the pixel trajectories in the blind time between two images. In
this work, we show that it is possible to compute per-pixel, continuous-time
optical flow using events from an event camera. Events provide temporally
fine-grained information about movement in pixel space due to their
asynchronous nature and microsecond response time. We leverage these benefits
to predict pixel trajectories densely in continuous time via parameterized
Bézier curves. To achieve this, we build a neural network with strong
inductive biases for this task: First, we build multiple sequential correlation
volumes in time using event data. Second, we use Bézier curves to index these
correlation volumes at multiple timestamps along the trajectory. Third, we use
the retrieved correlation to update the Bézier curve representations
iteratively. Our method can optionally include image pairs to boost performance
further. To the best of our knowledge, our model is the first method that can
regress dense pixel trajectories from event data. To train and evaluate our
model, we introduce a synthetic dataset (MultiFlow) that features moving
objects and ground truth trajectories for every pixel. Our quantitative
experiments not only suggest that our method successfully predicts pixel
trajectories in continuous time but also that it is competitive in the
traditional two-view pixel displacement metric on MultiFlow and DSEC-Flow. Open
source code and datasets are released to the public.
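The core idea of representing each pixel's trajectory as a Bézier curve can be illustrated with a minimal sketch. This is not the paper's actual network or parameterization: the `bezier_point` helper, the curve degree, and the control-point values below are illustrative assumptions; in the method, the control points are regressed by the network and refined iteratively using correlation lookups.

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    De Casteljau's algorithm. control_points is a list of (x, y)
    pixel coordinates; the curve passes through the first and last
    points and is pulled toward the intermediate ones."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Repeated linear interpolation between neighboring points.
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A hypothetical quadratic (degree-2) trajectory for one pixel: it starts
# at (10, 10), bends toward (12, 15), and ends at (20, 10).
ctrl = [(10.0, 10.0), (12.0, 15.0), (20.0, 10.0)]

start = bezier_point(ctrl, 0.0)  # initial pixel position
end = bezier_point(ctrl, 1.0)    # final position: the two-view displacement
mid = bezier_point(ctrl, 0.5)    # a position in the "blind time" between frames
print(start, mid, end)
```

Because the curve is defined for every t in [0, 1], the trajectory can be queried at arbitrary intermediate timestamps; this is what lets the method index correlation volumes at multiple points in time rather than only at the two frame instants.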
Related papers
- MemFlow: Optical Flow Estimation and Prediction with Memory [54.22820729477756]
We present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Our method enables memory read-out and update modules for aggregating historical motion information in real-time.
Our approach seamlessly extends to the future prediction of optical flow based on past observations.
arXiv Detail & Related papers (2024-04-07T04:56:58Z)
- Graph-based Asynchronous Event Processing for Rapid Object Recognition [59.112755601918074]
Event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach can efficiently process data event-by-event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
arXiv Detail & Related papers (2023-08-28T08:59:57Z)
- Optical flow estimation from event-based cameras and spiking neural networks [0.4899818550820575]
Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs)
We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations.
Thanks to separable convolutions, we have been able to develop a light model that can nonetheless yield reasonably accurate optical flow estimates.
arXiv Detail & Related papers (2023-02-13T16:17:54Z) - Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- Learning Dense and Continuous Optical Flow from an Event Camera [28.77846425802558]
Event cameras such as DAVIS can simultaneously output high temporal resolution events and low frame-rate intensity images.
Most of the existing optical flow estimation methods are based on two consecutive image frames and can only estimate discrete flow at a fixed time interval.
We propose a novel deep learning-based dense and continuous optical flow estimation framework from a single image with event streams.
arXiv Detail & Related papers (2022-11-16T17:53:18Z)
- RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos [28.995525297929348]
RealFlow is a framework that can create large-scale optical flow datasets directly from unlabeled realistic videos.
We first estimate optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow.
Our approach achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods.
arXiv Detail & Related papers (2022-07-22T13:33:03Z)
- Particle Videos Revisited: Tracking Through Occlusions Using Point Trajectories [29.258861811749103]
We revisit Sand and Teller's "particle video" approach, and study pixel tracking as a long-range motion estimation problem.
We re-build this classic approach using components that drive the current state-of-the-art in flow and object tracking.
We train our models using long-range amodal point trajectories mined from existing optical flow datasets.
arXiv Detail & Related papers (2022-04-08T16:05:48Z)
- SCFlow: Optical Flow Estimation for Spiking Camera [50.770803466875364]
Spiking camera has enormous potential in real applications, especially for motion estimation in high-speed scenes.
Optical flow estimation has achieved remarkable success in image-based and event-based vision, but existing methods cannot be directly applied to the spike stream from a spiking camera.
This paper presents SCFlow, a novel deep learning pipeline for optical flow estimation for spiking cameras.
arXiv Detail & Related papers (2021-10-08T06:16:45Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent), motivated by the lack of a realistic, large-scale dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- OmniFlow: Human Omnidirectional Optical Flow [0.0]
Our paper presents OmniFlow: a new synthetic omnidirectional human optical flow dataset.
Using a rendering engine, we create a naturalistic 3D indoor environment with textured rooms, characters, actions, objects, illumination, and motion blur.
The simulation outputs rendered images of household activities together with the corresponding forward and backward optical flow.
arXiv Detail & Related papers (2021-04-16T08:25:20Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.