BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation
- URL: http://arxiv.org/abs/2303.07716v1
- Date: Tue, 14 Mar 2023 09:03:54 GMT
- Title: BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation
- Authors: Yijin Li, Zhaoyang Huang, Shuo Chen, Xiaoyu Shi, Hongsheng Li, Hujun
Bao, Zhaopeng Cui, Guofeng Zhang
- Abstract summary: We present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow.
Based on BlinkSim, we construct a large training dataset and evaluation benchmark BlinkFlow.
Experiments show that BlinkFlow improves the generalization performance of state-of-the-art methods by more than 40% on average and up to 90%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras provide high temporal precision, low data rates, and high
dynamic range visual perception, which are well-suited for optical flow
estimation. While data-driven optical flow estimation has obtained great
success in RGB cameras, its generalization performance is seriously hindered in
event cameras mainly due to the limited and biased training data. In this
paper, we present a novel simulator, BlinkSim, for the fast generation of
large-scale data for event-based optical flow. BlinkSim consists of a
configurable rendering engine and a flexible engine for event data simulation.
By leveraging the wealth of current 3D assets, the rendering engine enables us
to automatically build up thousands of scenes with different objects, textures,
and motion patterns and render very high-frequency images for realistic event
data simulation. Based on BlinkSim, we construct a large training dataset and
evaluation benchmark, BlinkFlow, that contains abundant, diverse, and
challenging event data with optical flow ground truth. Experiments show that
BlinkFlow improves the generalization performance of state-of-the-art methods
by more than 40% on average and up to 90%. Moreover, we further propose an
Event optical Flow transFormer (E-FlowFormer) architecture. Powered by our
BlinkFlow, E-FlowFormer outperforms the SOTA methods by up to 91% on the MVSEC
dataset and 14% on the DSEC dataset, achieving the best generalization
performance.
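The abstract describes rendering very high-frequency images and feeding them to an event-simulation engine. The standard idea behind such simulators (not BlinkSim's actual implementation, which is not detailed here) is to fire an event whenever a pixel's log-brightness drifts by more than a contrast threshold since that pixel's last event. A minimal sketch, with the threshold `C` and the `simulate_events` helper both illustrative assumptions:

```python
import numpy as np

def simulate_events(frames, times, C=0.2, eps=1e-6):
    """Generate events from a sequence of high-frequency intensity frames.

    Each event is (t, y, x, polarity). An event fires whenever the
    log-brightness at a pixel moves by at least the contrast threshold C
    relative to that pixel's per-pixel reference level.
    """
    log_ref = np.log(frames[0] + eps)          # per-pixel reference log-brightness
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= C)  # pixels that crossed the threshold
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            n = int(abs(diff[y, x]) // C)       # one event per full threshold crossing
            events.extend((t, y, x, pol) for _ in range(n))
            log_ref[y, x] += pol * n * C        # move the reference toward the new level
    return events
```

Real simulators additionally interpolate event timestamps between frames and model sensor noise; rendering at a very high frame rate, as the paper emphasizes, keeps the per-frame brightness change small so this per-frame quantization stays realistic.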
Related papers
- RPEFlow: Multimodal Fusion of RGB-PointCloud-Event for Joint Optical Flow and Scene Flow Estimation [43.358140897849616]
In this paper, we incorporate RGB images, Point clouds and Events for joint optical flow and scene flow estimation with our proposed multi-stage multimodal fusion model, RPEFlow.
Experiments on both synthetic and real datasets show that our model outperforms the existing state-of-the-art by a wide margin.
arXiv Detail & Related papers (2023-09-26T17:23:55Z) - Towards Anytime Optical Flow Estimation with Event Cameras [35.685866753715416]
Event cameras are capable of responding to log-brightness changes in microseconds.
Existing datasets collected via event cameras provide limited frame rate optical flow ground truth.
We propose EVA-Flow, an EVent-based Anytime Flow estimation network to produce high-frame-rate event optical flow.
arXiv Detail & Related papers (2023-07-11T06:15:12Z) - RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos [28.995525297929348]
RealFlow is a framework that can create large-scale optical flow datasets directly from unlabeled realistic videos.
We first estimate optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow.
Our approach achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods.
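The synthesis step described above, generating a new image by pushing pixels along a predicted flow field, can be sketched as a naive forward warp. This is an illustrative simplification, not RealFlow's EM-based pipeline; the function name and nearest-neighbor splatting are assumptions:

```python
import numpy as np

def forward_warp(image, flow):
    """Splat pixels of `image` along `flow` to synthesize a new frame.

    flow[y, x] = (dx, dy). Pixels landing outside the frame are dropped,
    and collisions keep the last writer; real pipelines resolve occlusions
    with softmax splatting or depth ordering, and inpaint the holes.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.rint(xs + flow[..., 0]).astype(int)  # target columns
    yt = np.rint(ys + flow[..., 1]).astype(int)  # target rows
    valid = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    out[yt[valid], xt[valid]] = image[ys[valid], xs[valid]]
    return out
```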
arXiv Detail & Related papers (2022-07-22T13:33:03Z) - SCFlow: Optical Flow Estimation for Spiking Camera [50.770803466875364]
Spiking camera has enormous potential in real applications, especially for motion estimation in high-speed scenes.
Optical flow estimation has achieved remarkable success in image-based and event-based vision, but existing methods cannot be directly applied to the spike stream from a spiking camera.
This paper presents SCFlow, a novel deep learning pipeline for optical flow estimation for spiking cameras.
arXiv Detail & Related papers (2021-10-08T06:16:45Z) - Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
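End-point error (EPE), the metric cited above and the standard measure on MVSEC and DSEC, is simply the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def end_point_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth
    flow vectors; both arrays have shape (H, W, 2)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))
```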
arXiv Detail & Related papers (2021-08-24T07:39:08Z) - VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) to address the lack of a realistic, large-scale dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z) - AutoFlow: Learning a Better Training Set for Optical Flow [62.40293188964933]
AutoFlow is a method to render training data for optical flow.
AutoFlow achieves state-of-the-art accuracy in pre-training both PWC-Net and RAFT.
arXiv Detail & Related papers (2021-04-29T17:55:23Z) - OmniFlow: Human Omnidirectional Optical Flow [0.0]
Our paper presents OmniFlow: a new synthetic omnidirectional human optical flow dataset.
Based on a rendering engine, we create a naturalistic 3D indoor environment with textured rooms, characters, actions, objects, illumination, and motion blur.
The simulation outputs rendered images of household activities together with the corresponding forward and backward optical flow.
arXiv Detail & Related papers (2021-04-16T08:25:20Z) - Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.