Secrets of Event-Based Optical Flow
- URL: http://arxiv.org/abs/2207.10022v2
- Date: Thu, 21 Jul 2022 17:26:51 GMT
- Title: Secrets of Event-Based Optical Flow
- Authors: Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- Abstract summary: Event cameras respond to scene dynamics and offer advantages for motion estimation.
We develop a principled method to extend the Contrast Maximization framework to estimate optical flow from events alone.
Our method ranks first among unsupervised methods on the MVSEC benchmark, and is competitive on the DSEC benchmark.
- Score: 13.298845944779108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras respond to scene dynamics and offer advantages for
motion estimation. Following recent image-based deep-learning achievements,
optical flow estimation methods for event cameras have rushed to combine those
image-based methods with event data. However, this combination requires several
adaptations (data conversion, loss function, etc.) because events and frames
have very different properties. We
develop a principled method to extend the Contrast Maximization framework to
estimate optical flow from events alone. We investigate key elements: how to
design the objective function to prevent overfitting, how to warp events to
deal better with occlusions, and how to improve convergence with multi-scale
raw events. With these key elements, our method ranks first among unsupervised
methods on the MVSEC benchmark, and is competitive on the DSEC benchmark.
Moreover, our method allows us to expose issues in the ground-truth flow of
those benchmarks, and it produces remarkable results when transferred to
unsupervised learning settings. Our code is available at
https://github.com/tub-rip/event_based_optical_flow
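The core of the Contrast Maximization principle is easy to state: warp events to a reference time along a candidate flow, accumulate them into an image of warped events (IWE), and score the candidate by the sharpness (e.g., variance) of that image. The sketch below is a minimal NumPy illustration of this principle, not the authors' implementation; the function name, the nearest-pixel accumulation, and the brute-force search over constant flows are all assumptions for illustration. The paper's key elements (objective design against overfitting, occlusion-aware warping, multi-scale events) build on top of this basic loop.

```python
# Minimal sketch of the Contrast Maximization principle (illustrative only,
# NOT the authors' code): warp events along a candidate flow, accumulate them
# into an Image of Warped Events (IWE), and score the flow by the IWE variance.
import numpy as np

def iwe_variance(xs, ys, ts, flow, height, width, t_ref=0.0):
    """Score a single constant 2D flow (vx, vy), in pixels/second."""
    vx, vy = flow
    # Warp each event (x, y, t) to the reference time along the candidate flow.
    wx = xs - (ts - t_ref) * vx
    wy = ys - (ts - t_ref) * vy
    # Accumulate warped events into an image by nearest-pixel voting.
    ix = np.clip(np.round(wx).astype(int), 0, width - 1)
    iy = np.clip(np.round(wy).astype(int), 0, height - 1)
    iwe = np.zeros((height, width))
    np.add.at(iwe, (iy, ix), 1.0)
    # A sharper (higher-contrast) IWE indicates better motion compensation.
    return iwe.var()

# Toy usage: brute-force search over a few candidate flows for synthetic events.
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 64, 500), rng.uniform(0, 64, 500)
ts = rng.uniform(0.0, 0.1, 500)
candidates = [(vx, vy) for vx in (-50, 0, 50) for vy in (-50, 0, 50)]
best = max(candidates, key=lambda f: iwe_variance(xs, ys, ts, f, 64, 64))
print("best candidate flow:", best)
```

In the dense setting the flow is per-pixel rather than constant, which is exactly where a naive contrast objective can overfit; preventing that is one of the "key elements" the abstract highlights.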
Related papers
- Event Camera Data Dense Pre-training [10.918407820258246]
This paper introduces a self-supervised learning framework designed for pre-training neural networks tailored to dense prediction tasks using event camera data.
For training our framework, we curate a synthetic event camera dataset featuring diverse scene and motion patterns.
arXiv Detail & Related papers (2023-11-20T04:36:19Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- Passive Non-line-of-sight Imaging for Moving Targets with an Event Camera [0.0]
Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners.
Recent studies on passive NLOS mainly focus on steady-state measurement and reconstruction methods.
We propose a novel event-based passive NLOS imaging method.
arXiv Detail & Related papers (2022-09-27T10:56:14Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
The proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC; a sketch of the feature-correlation idea follows after this list.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent), motivated by the lack of a realistic, large-scale dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- Single Image Optical Flow Estimation with an Event Camera [38.92408855196647]
Event cameras are bio-inspired sensors that report intensity changes in microsecond resolution.
We propose an optical flow estimation approach based on a single (potentially blurred) image and events.
arXiv Detail & Related papers (2020-04-01T11:28:30Z)
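As referenced in the Dense Optical Flow from Event Cameras entry above, the following is a minimal sketch of one plausible reading of "feature correlation": a RAFT-style all-pairs correlation volume between two feature maps. The function name and the scaling are assumptions for illustration, not that paper's code.

```python
# Minimal sketch of RAFT-style all-pairs feature correlation (an illustrative
# assumption about "feature correlation" above, not the cited paper's code).
import torch

def all_pairs_correlation(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """f1, f2: (C, H, W) feature maps. Returns an (H, W, H, W) correlation volume."""
    c, h, w = f1.shape
    a = f1.reshape(c, h * w)            # (C, HW) features of the first map
    b = f2.reshape(c, h * w)            # (C, HW) features of the second map
    corr = a.t() @ b                    # (HW, HW) dot products between all pixel pairs
    return corr.reshape(h, w, h, w) / c ** 0.5  # scale by sqrt(C), as in RAFT

# Toy usage with random feature maps.
vol = all_pairs_correlation(torch.randn(16, 8, 8), torch.randn(16, 8, 8))
print(vol.shape)  # torch.Size([8, 8, 8, 8])
```

Each slice of such a volume tells the estimator how well a pixel in one map matches every location in the other, which a recurrent update can then query iteratively to refine the flow.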
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.