MemFlow: Optical Flow Estimation and Prediction with Memory
- URL: http://arxiv.org/abs/2404.04808v1
- Date: Sun, 7 Apr 2024 04:56:58 GMT
- Title: MemFlow: Optical Flow Estimation and Prediction with Memory
- Authors: Qiaole Dong, Yanwei Fu
- Abstract summary: We present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Our method introduces memory read-out and update modules for aggregating historical motion information in real time.
Our approach seamlessly extends to the future prediction of optical flow based on past observations.
- Score: 54.22820729477756
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical flow is a classical task that is important to the vision community. Classical optical flow estimation uses two frames as input, whilst some recent methods consider multiple frames to explicitly model long-range information. The former ones limit their ability to fully leverage temporal coherence along the video sequence; and the latter ones incur heavy computational overhead, typically not possible for real-time flow estimation. Some multi-frame-based approaches even necessitate unseen future frames for current estimation, compromising real-time applicability in safety-critical scenarios. To this end, we present MemFlow, a real-time method for optical flow estimation and prediction with memory. Our method enables memory read-out and update modules for aggregating historical motion information in real-time. Furthermore, we integrate resolution-adaptive re-scaling to accommodate diverse video resolutions. Besides, our approach seamlessly extends to the future prediction of optical flow based on past observations. Leveraging effective historical motion aggregation, our method outperforms VideoFlow with fewer parameters and faster inference speed on Sintel and KITTI-15 datasets in terms of generalization performance. At the time of submission, MemFlow also leads in performance on the 1080p Spring dataset. Codes and models will be available at: https://dqiaole.github.io/MemFlow/.
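The memory read-out and update modules described in the abstract can be illustrated with a small attention-style sketch. This is a hedged toy version, not MemFlow's actual implementation: the class name `MotionMemory`, the bounded buffer length, and the feature dimensions are all illustrative assumptions.

```python
# Illustrative sketch of attention-based motion-memory aggregation.
# Buffer size, feature layout, and scaling are assumptions, not MemFlow's code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MotionMemory:
    def __init__(self, max_len=4):
        self.keys, self.values = [], []   # per-frame motion features
        self.max_len = max_len            # bounded buffer keeps cost real-time

    def readout(self, query):
        # Aggregate historical motion via attention over stored keys.
        if not self.keys:
            return np.zeros_like(query)
        K = np.stack(self.keys)           # (T, D)
        V = np.stack(self.values)         # (T, D)
        attn = softmax(K @ query / np.sqrt(query.size))
        return attn @ V                   # weighted sum of past motion features

    def update(self, key, value):
        # Append the newest frame's features; drop the oldest when full.
        self.keys.append(key)
        self.values.append(value)
        if len(self.keys) > self.max_len:
            self.keys.pop(0)
            self.values.pop(0)
```

At each new frame, `readout` conditions the current flow estimate on aggregated history, and `update` stores the frame's motion features; the fixed `max_len` is what keeps memory and compute bounded for streaming input.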
Related papers
- ScaleFlow++: Robust and Accurate Estimation of 3D Motion from Video [26.01796507893086]
This paper proposes a 3D motion perception method called ScaleFlow++ that is easy to generalize.
With just a pair of RGB images, ScaleFlow++ can robustly estimate optical flow and motion-in-depth (MID).
On KITTI, ScaleFlow++ achieved the best monocular scene flow estimation performance, reducing SF-all from 6.21 to 5.79.
arXiv Detail & Related papers (2024-09-16T11:59:27Z)
- OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision [6.173968909465726]
We present OptFlow, a fast optimization-based scene flow estimation method.
It achieves state-of-the-art performance for scene flow estimation on popular autonomous driving benchmarks.
arXiv Detail & Related papers (2024-01-04T21:47:56Z)
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow excels on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
arXiv Detail & Related papers (2023-11-28T07:53:51Z)
- VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation [61.660040308290796]
VideoFlow is a novel optical flow estimation framework for videos.
We first propose a TRi-frame Optical Flow (TROF) module that estimates bi-directional optical flows for the center frame in a three-frame manner.
With iterative flow estimation refinement, the information fused in individual TROFs is propagated through the whole sequence via a MOtion Propagation (MOP) module.
arXiv Detail & Related papers (2023-03-15T03:14:30Z)
- BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation [76.66876888943385]
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception.
We present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow.
arXiv Detail & Related papers (2023-03-14T09:03:54Z)
- RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos [28.995525297929348]
RealFlow is a framework that can create large-scale optical flow datasets directly from unlabeled realistic videos.
We first estimate optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow.
Our approach achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods.
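The flow-then-synthesize step above can be sketched with a minimal backward warp. This is a simplifying illustration, not RealFlow's actual rendering pipeline: `backward_warp`, nearest-neighbor sampling, and grayscale input are all assumptions made to keep the example short.

```python
# Toy backward warp: synthesize a new view by sampling the source image at
# locations displaced by a flow field. Nearest-neighbor sampling for brevity.
import numpy as np

def backward_warp(img, flow):
    """img: (H, W) grayscale image; flow: (H, W, 2) as (dx, dy) per pixel."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # For each output pixel, look up the source pixel the flow points at.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[src_y, src_x]
```

Pairing a real frame with such a synthesized frame yields an image pair whose ground-truth flow is exactly the field used for warping, which is the basic trick behind dataset generation from unlabeled video.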
arXiv Detail & Related papers (2022-07-22T13:33:03Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These recurrent updates impose large computation and memory overheads and are not directly trained to produce a stable final estimate.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
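The fixed-point idea behind a DEQ-style estimator can be illustrated on a toy contraction: instead of unrolling a fixed number of recurrent steps, solve for the equilibrium of the update directly. Here `fixed_point` and the linear update `f` are stand-ins under stated assumptions, not the paper's actual flow operator.

```python
# Toy deep-equilibrium solve: find z* with z* = f(z*, x) by iterating to
# convergence, rather than running a fixed number of unrolled RNN steps.
import numpy as np

def fixed_point(f, x, z0, tol=1e-6, max_iter=100):
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:   # reached equilibrium
            return z_next
        z = z_next
    return z

# A contractive stand-in update: the equilibrium of z = 0.5*z + x is z* = 2x.
f = lambda z, x: 0.5 * z + x
z_star = fixed_point(f, x=np.array([1.0, -2.0]), z0=np.zeros(2))
```

The practical appeal is that only the equilibrium matters, so the solver can use as many or as few iterations as convergence requires, and gradients can be taken through the fixed point implicitly rather than through every unrolled step.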
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception is proposed to jointly evaluate the latency and accuracy into a single metric for video online perception.
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on Argoverse-HD dataset and improves the AP by 4.9% compared to the strong baseline.
arXiv Detail & Related papers (2022-03-23T11:33:27Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
On the challenging Sintel benchmark, our new framework outperforms RAFT with 32 refinement iterations.
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
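The correlation-and-softmax global matching described for GMFlow can be sketched as a soft argmax over all target locations. The function name `global_match_flow`, the feature shapes, and the dot-product scaling are illustrative assumptions, not GMFlow's actual implementation.

```python
# Toy global matching: correlate every source feature with all target
# features, softmax over the target grid, and take the expected displacement.
import numpy as np

def global_match_flow(feat1, feat2):
    """feat1, feat2: (H, W, D) feature maps of two frames; returns (H, W, 2) flow."""
    H, W, D = feat1.shape
    f1 = feat1.reshape(H * W, D)
    f2 = feat2.reshape(H * W, D)
    corr = f1 @ f2.T / np.sqrt(D)                 # (HW, HW) global correlation
    corr -= corr.max(axis=1, keepdims=True)       # numerically stable softmax
    prob = np.exp(corr)
    prob /= prob.sum(axis=1, keepdims=True)
    # Expectation over target coordinates acts as a differentiable soft argmax.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (HW, 2)
    matched = prob @ grid                         # expected matched position
    flow = matched - grid                         # displacement per pixel
    return flow.reshape(H, W, 2)
```

Because the matching distribution spans the entire target frame rather than a local window, large displacements can be recovered in a single matching step, which is what lets such a framework compete with many-iteration recurrent refinement.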
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.