Learning Optical Flow from Event Camera with Rendered Dataset
- URL: http://arxiv.org/abs/2303.11011v1
- Date: Mon, 20 Mar 2023 10:44:32 GMT
- Title: Learning Optical Flow from Event Camera with Rendered Dataset
- Authors: Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan,
Shuaicheng Liu
- Abstract summary: We propose to render a physically correct event-flow dataset using computer graphics models.
In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene content variations.
- Score: 45.4342948504988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of estimating optical flow from event cameras. One
important issue is how to build a high-quality event-flow dataset with accurate
event values and flow labels. Previous datasets are created by either capturing
real scenes by event cameras or synthesizing from images with pasted foreground
objects. The former case can produce real event values but with calculated flow
labels, which are sparse and inaccurate. The latter case can generate dense flow
labels, but the interpolated events are prone to errors. In this work, we
propose to render a physically correct event-flow dataset using computer
graphics models. In particular, we first create indoor and outdoor 3D scenes in
Blender with rich scene content variations. Second, diverse camera motions are
included for virtual capture, producing images and accurate flow labels. Third,
we render high-framerate videos between consecutive images to obtain accurate
events. The
rendered dataset can adjust the density of events, based on which we further
introduce an adaptive density module (ADM). Experiments show that our proposed
dataset facilitates event-flow learning: previous approaches, when trained on
our dataset, consistently improve their performance by a relatively large
margin. In addition, event-flow pipelines equipped with our ADM improve
performance further.
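For intuition on the third step, the standard contrast-threshold event model fires an event at a pixel whenever its log intensity has changed by more than a threshold C since that pixel's last event; rendering high-framerate video between images makes this thresholding, and the resulting event timestamps, accurate. Below is a minimal sketch of that model in the style of common event simulators such as ESIM; the function name, parameters, and the one-event-per-crossing simplification are illustrative assumptions, not the paper's actual renderer.

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Convert high-framerate video frames into (x, y, t, polarity) events.

    frames: (N, H, W) float array of intensities in [0, 1].
    timestamps: (N,) array of frame times in seconds.
    C: contrast threshold on log intensity (illustrative default).
    """
    log_ref = np.log(frames[0] + eps)  # log intensity at each pixel's last event
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        # Pixels whose log-intensity change crosses +/-C emit an event.
        for polarity, mask in ((1, diff >= C), (-1, diff <= -C)):
            ys, xs = np.nonzero(mask)
            events.extend((int(x), int(y), float(t), polarity)
                          for x, y in zip(xs, ys))
            # Reset the reference at pixels that fired. Real simulators emit
            # several events for jumps larger than C; one suffices here.
            log_ref[mask] = log_cur[mask]
        # Denser frames in time give more accurate event timestamps, which is
        # why rendering high-framerate video between images helps.
    return events
```

Raising or lowering the contrast threshold C (or the rendering frame rate) changes how many events fire, which is one plausible way a rendered pipeline exposes the tunable event density that a module like the ADM can exploit.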
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Evaluating Image-Based Face and Eye Tracking with Event Cameras [9.677797822200965]
Event Cameras, also known as Neuromorphic sensors, capture changes in local light intensity at the pixel level, producing asynchronously generated data termed "events".
This data format mitigates common issues observed in conventional cameras, like under-sampling when capturing fast-moving objects.
We evaluate the viability of integrating conventional algorithms with event-based data, transformed into a frame format.
arXiv Detail & Related papers (2024-08-19T20:27:08Z)
- Text-to-Events: Synthetic Event Camera Streams from Conditional Text Input [8.365349007799296]
Event cameras are advantageous for tasks that require vision sensors with low-latency and sparse output responses.
This paper reports a method for creating new labelled event datasets by using a text-to-X model.
We demonstrate that the model can generate realistic event sequences of human gestures prompted by different text statements.
arXiv Detail & Related papers (2024-06-05T16:34:12Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation [76.66876888943385]
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception.
We present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow.
arXiv Detail & Related papers (2023-03-14T09:03:54Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
Due to the lack of a realistic and scaled dataset for this task, we propose a large-scale Visible-Event benchmark (termed VisEvent).
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- The Spatio-Temporal Poisson Point Process: A Simple Model for the Alignment of Event Camera Data [19.73526916714181]
Event cameras provide a natural and data-efficient representation of visual information.
We propose a new model of event data that captures its natural spatio-temporal structure.
We show new state-of-the-art accuracy for rotational velocity estimation on the DAVIS 240C dataset.
arXiv Detail & Related papers (2021-06-13T00:43:27Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
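For context on the last entry, here is a sketch of what such a recurrent architecture can look like: a convolutional GRU carries hidden state across successive event tensors, so the network integrates sparse events over time instead of seeing each slice in isolation as a feed-forward model would. This is a minimal PyTorch illustration only; the module names, channel counts, and voxel-grid input format are assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU: gated recurrent update with spatial convolutions."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new

class RecurrentEventDepth(nn.Module):
    """Hypothetical recurrent depth predictor over a sequence of event tensors."""
    def __init__(self, event_ch=5, hid_ch=32):
        super().__init__()
        self.encoder = nn.Conv2d(event_ch, hid_ch, 3, padding=1)
        self.gru = ConvGRUCell(hid_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 3, padding=1)

    def forward(self, event_tensors):
        # event_tensors: list of (B, event_ch, H, W) event voxel grids over time.
        h = torch.zeros_like(self.encoder(event_tensors[0]))  # initial hidden state
        depths = []
        for x in event_tensors:
            h = self.gru(torch.relu(self.encoder(x)), h)  # integrate over time
            depths.append(torch.sigmoid(self.head(h)))    # normalized depth map
        return depths

# Usage: RecurrentEventDepth()([torch.rand(1, 5, 64, 64) for _ in range(4)])
```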