STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame
Optical Flow Estimation
- URL: http://arxiv.org/abs/2007.05481v1
- Date: Fri, 10 Jul 2020 17:01:34 GMT
- Title: STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame
Optical Flow Estimation
- Authors: Pierre Godet, Alexandre Boulch, Aurélien Plyer and Guy Le Besnerais
- Abstract summary: We present a new lightweight CNN-based algorithm for multi-frame optical flow estimation.
The resulting STaRFlow algorithm gives state-of-the-art performance on MPI Sintel and KITTI 2015.
- Score: 64.99259320624148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new lightweight CNN-based algorithm for multi-frame optical flow
estimation. Our solution introduces a double recurrence over spatial scale and
time through repeated use of a generic "STaR" (SpatioTemporal Recurrent) cell.
It includes (i) a temporal recurrence based on conveying learned features
rather than optical flow estimates; (ii) an occlusion detection process which
is coupled with optical flow estimation and therefore uses a very limited
number of extra parameters. The resulting STaRFlow algorithm gives
state-of-the-art performance on MPI Sintel and KITTI 2015 and uses
significantly fewer parameters than all other methods with comparable results.
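Since the abstract describes the architecture only at a high level, the following is a minimal sketch of what such a doubly recurrent cell could look like in PyTorch; `STaRCell`, its channel sizes, and the plain convolutional update are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a SpatioTemporal Recurrent (STaR) cell in PyTorch. Module
# names, channel sizes, and the simple convolutional update are illustrative
# assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class STaRCell(nn.Module):
    """One shared cell, reused across pyramid scales (spatial recurrence)
    and across frames (temporal recurrence over learned features)."""

    def __init__(self, feat_ch=128, hidden_ch=64):
        super().__init__()
        self.hidden_ch = hidden_ch
        in_ch = feat_ch + hidden_ch + 2  # features + temporal state + coarse flow
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, 96, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(96, hidden_ch, 3, padding=1), nn.LeakyReLU(0.1),
        )
        self.flow_head = nn.Conv2d(hidden_ch, 2, 3, padding=1)
        # Occlusion prediction reuses the cell's features, so it costs only
        # one extra conv's worth of parameters.
        self.occ_head = nn.Conv2d(hidden_ch, 1, 3, padding=1)

    def forward(self, feat, hidden, coarse_flow):
        if hidden is None:  # first frame: no temporal state yet
            hidden = feat.new_zeros(feat.size(0), self.hidden_ch, *feat.shape[-2:])
        x = torch.cat([feat, hidden, coarse_flow], dim=1)
        hidden = self.encode(x)  # learned features, conveyed to the next frame
        flow = coarse_flow + self.flow_head(hidden)  # residual flow update
        occ = torch.sigmoid(self.occ_head(hidden))   # occlusion probability map
        return flow, occ, hidden
```

In use, this one cell would be applied coarse-to-fine at every scale of every frame: `flow` and `hidden` are upsampled between scales, and `hidden` (warped by the current flow, in practice) is also carried to the next time step, realizing the double recurrence over scale and time.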
Related papers
- Robust Optical Flow Computation: A Higher-Order Differential Approach [0.0]
This research proposes an algorithm for optical flow computation that exploits the higher precision of a second-order Taylor-series approximation.
The algorithm demonstrates strong performance on optical flow benchmarks such as KITTI and Middlebury.
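For context, the generic second-order refinement of the brightness-constancy constraint looks as follows; this is a textbook sketch, and the paper's exact formulation may differ.

```latex
% Brightness constancy expanded to second order in the displacement
% w = (u, v, 1)^T; \nabla I and H_I collect the first- and second-order
% spatiotemporal derivatives of the image I. The paper's exact formulation
% may differ from this generic form.
0 = I(x+u,\, y+v,\, t+1) - I(x, y, t)
  \approx \nabla I^{\top}\mathbf{w}
  + \tfrac{1}{2}\,\mathbf{w}^{\top} H_I\,\mathbf{w},
\qquad \mathbf{w} = (u, v, 1)^{\top}
```

Keeping the quadratic term makes the constraint more faithful for larger displacements than the standard first-order linearization.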
arXiv Detail & Related papers (2024-10-12T15:20:11Z)
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow excels on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
arXiv Detail & Related papers (2023-11-28T07:53:51Z)
- SSTM: Spatiotemporal Recurrent Transformers for Multi-frame Optical Flow Estimation [0.0]
Inaccurate optical flow estimates in and near occluded regions, and in out-of-boundary regions, are two of the current significant limitations of optical flow estimation algorithms.
Recent state-of-the-art optical flow estimation algorithms are two-frame based methods where optical flow is estimated sequentially for each consecutive image pair in a sequence.
We propose a learning-based multi-frame optical flow estimation method that estimates two or more consecutive optical flows in parallel from multi-frame image sequences.
arXiv Detail & Related papers (2023-04-26T23:39:40Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms the 32-iteration RAFT on the challenging Sintel benchmark.
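A hedged sketch of the correlation-and-softmax global matching step, in the spirit of GMFlow: the tensor layout, temperature scaling, and soft-argmax readout below are assumptions, and the Transformer feature enhancement and self-attention flow propagation are omitted.

```python
# Sketch of global matching via all-pairs correlation + softmax. Shapes and
# the soft-argmax readout are assumptions; the Transformer feature enhancement
# and self-attention flow propagation are omitted.
import torch

def global_matching_flow(f1, f2):
    """f1, f2: [B, C, H, W] feature maps of two frames."""
    b, c, h, w = f1.shape
    f1 = f1.flatten(2).transpose(1, 2)          # [B, H*W, C]
    f2 = f2.flatten(2).transpose(1, 2)          # [B, H*W, C]
    corr = f1 @ f2.transpose(1, 2) / c ** 0.5   # [B, H*W, H*W] all-pairs scores
    prob = torch.softmax(corr, dim=-1)          # matching distribution per pixel
    # Soft-argmax: expected target coordinates under the matching distribution.
    ys, xs = torch.meshgrid(torch.arange(h, device=f1.device),
                            torch.arange(w, device=f1.device), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().view(1, h * w, 2)
    flow = (prob @ grid - grid).transpose(1, 2).reshape(b, 2, h, w)
    return flow
```

Because every pixel is compared against every location in the other frame, this formulation handles large displacements without the iterative local lookups of RAFT-style methods.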
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- Optical-Flow-Reuse-Based Bidirectional Recurrent Network for Space-Time Video Super-Resolution [52.899234731501075]
Space-time video super-resolution (ST-VSR) simultaneously increases the spatial resolution and frame rate for a given video.
Existing methods typically struggle to efficiently leverage information from a large range of neighboring frames.
We propose a coarse-to-fine bidirectional recurrent neural network instead of using ConvLSTM to leverage knowledge between adjacent frames.
arXiv Detail & Related papers (2021-10-13T15:21:30Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
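Since the summary names only the well-known coarse-to-fine scheme without detailing the innovations, here is a generic sketch of that scheme; `decoder` and the pyramid handling are hypothetical stand-ins, not FastFlowNet's actual modules.

```python
# Generic coarse-to-fine flow loop with backward warping -- the standard
# scheme the summary refers to. `decoder` is a hypothetical stand-in module.
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp feat [B, C, H, W] by flow [B, 2, H, W] (x, y order)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)  # [1, 2, H, W]
    coords = base + flow
    # Normalize absolute coordinates to [-1, 1] for grid_sample.
    gx = 2 * coords[:, 0] / (w - 1) - 1
    gy = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(feat, torch.stack([gx, gy], dim=-1), align_corners=True)

def coarse_to_fine(feats1, feats2, decoder):
    """feats1/feats2: feature pyramids (coarsest first) of the two frames."""
    flow = None
    for f1, f2 in zip(feats1, feats2):
        if flow is None:
            flow = f1.new_zeros(f1.size(0), 2, *f1.shape[-2:])
        else:
            # Upsample the coarser estimate and rescale its magnitude.
            flow = 2.0 * F.interpolate(flow, size=f1.shape[-2:],
                                       mode="bilinear", align_corners=True)
        f2w = warp(f2, flow)                  # align frame-2 features to frame 1
        flow = flow + decoder(f1, f2w, flow)  # residual refinement at this scale
    return flow
```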
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Normalized Convolution Upsampling for Refined Optical Flow Estimation [23.652615797842085]
Normalized Convolution UPsampler (NCUP) is an efficient joint upsampling approach to produce the full-resolution flow during the training of optical flow CNNs.
Our proposed approach formulates the upsampling task as a sparse problem and employs normalized convolutional neural networks to solve it.
We achieve state-of-the-art results on the Sintel benchmark with a 6% error reduction, and on-par results on the KITTI dataset, while having 7.5% fewer parameters.
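A minimal sketch of confidence-normalized convolution applied to flow upsampling; the fixed box kernel, `scale`, and `eps` are illustrative assumptions, whereas NCUP learns its upsampling kernels jointly with the flow network.

```python
# Sketch of normalized convolution for upsampling sparse flow, in the spirit
# of NCUP. The fixed box kernel is an assumption; the paper learns its
# upsampling kernels jointly with the optical flow CNN.
import torch
import torch.nn.functional as F

def normalized_conv_upsample(flow, conf, scale=4, eps=1e-8):
    """flow: [B, 2, h, w] coarse flow; conf: [B, 1, h, w] confidence in [0, 1]."""
    b, _, h, w = flow.shape
    # Scatter the coarse estimates into a sparse full-resolution grid.
    sparse = flow.new_zeros(b, 2, h * scale, w * scale)
    mask = flow.new_zeros(b, 1, h * scale, w * scale)
    sparse[..., ::scale, ::scale] = flow * scale  # rescale flow magnitudes
    mask[..., ::scale, ::scale] = conf
    # Normalized convolution: smooth confidence-weighted data, then divide by
    # the smoothed confidence itself so gaps are filled from valid neighbors.
    k = torch.ones(1, 1, scale + 1, scale + 1, device=flow.device)
    num = F.conv2d(sparse * mask, k.repeat(2, 1, 1, 1),
                   padding=scale // 2, groups=2)
    den = F.conv2d(mask, k, padding=scale // 2)
    return num / (den + eps)
```

Treating upsampling as interpolation of confidence-weighted sparse samples is what lets the approach refine flow boundaries without a heavy dedicated refinement network.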
arXiv Detail & Related papers (2021-02-13T18:34:03Z)
- FPCR-Net: Feature Pyramidal Correlation and Residual Reconstruction for Optical Flow Estimation [72.41370576242116]
We propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs.
It consists of two main modules: pyramid correlation mapping and residual reconstruction.
Experiment results show that the proposed scheme achieves state-of-the-art performance, improving the average end-point error (AEE) by 0.80, 1.15, and 0.10 against competing baseline methods.
arXiv Detail & Related papers (2020-01-17T07:13:51Z)