AccFlow: Backward Accumulation for Long-Range Optical Flow
- URL: http://arxiv.org/abs/2308.13133v1
- Date: Fri, 25 Aug 2023 01:51:26 GMT
- Authors: Guangyang Wu, Xiaohong Liu, Kunming Luo, Xi Liu, Qingqing Zheng,
Shuaicheng Liu, Xinyang Jiang, Guangtao Zhai, Wenyi Wang
- Abstract summary: This paper proposes a novel recurrent framework called AccFlow for long-range optical flow estimation.
We demonstrate the superiority of backward accumulation over conventional forward accumulation.
Experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation.
- Score: 70.4251045372285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent deep learning-based optical flow estimators have exhibited impressive
performance in generating local flows between consecutive frames. However, the
estimation of long-range flows between distant frames, particularly under
complex object deformation and large motion occlusion, remains a challenging
task. One promising solution is to accumulate local flows explicitly or
implicitly to obtain the desired long-range flow. Nevertheless, the
accumulation errors and flow misalignment can hinder the effectiveness of this
approach. This paper proposes a novel recurrent framework called AccFlow, which
recursively accumulates local flows backward using a deformable module named
AccPlus. In addition, an adaptive blending module is designed alongside AccPlus
to alleviate the occlusion effect via backward accumulation and to rectify the
accumulation error. Notably, we demonstrate the superiority of backward
accumulation over conventional forward accumulation, which to the best of our
knowledge has not been explicitly established before. To train and evaluate the
proposed AccFlow, we have constructed a large-scale high-quality dataset named
CVO, which provides ground-truth optical flow labels between adjacent and
distant frames. Extensive experiments validate the effectiveness of AccFlow in
handling long-range optical flow estimation. Codes are available at
https://github.com/mulns/AccFlow .
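As a concrete illustration of explicit flow accumulation, the sketch below chains per-frame local flows into a long-range flow by repeated warping. It implements only the generic composition rule F_{k→t}(x) = f_k(x) + F_{k+1→t}(x + f_k(x)); the function names and the bilinear sampler are illustrative, and this is not AccFlow's AccPlus module, which additionally learns deformable alignment and occlusion-aware blending to suppress exactly the accumulation errors this naive scheme incurs.

```python
import numpy as np

def warp_flow(flow, disp):
    """Bilinearly sample `flow` at positions displaced by `disp`.
    Both arrays have shape (H, W, 2) holding per-pixel (dx, dy)."""
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = np.clip(xs + disp[..., 0], 0, W - 1)
    y = np.clip(ys + disp[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    return (flow[y0, x0] * ((1 - wx) * (1 - wy))[..., None]
          + flow[y0, x1] * (wx * (1 - wy))[..., None]
          + flow[y1, x0] * ((1 - wx) * wy)[..., None]
          + flow[y1, x1] * (wx * wy)[..., None])

def forward_accumulate(local_flows):
    """Forward chaining: F_{0->k+1}(x) = F_{0->k}(x) + f_k(x + F_{0->k}(x))."""
    acc = local_flows[0]
    for f in local_flows[1:]:
        acc = acc + warp_flow(f, acc)
    return acc

def backward_accumulate(local_flows):
    """Backward chaining: F_{k->n}(x) = f_k(x) + F_{k+1->n}(x + f_k(x))."""
    acc = local_flows[-1]
    for f in reversed(local_flows[:-1]):
        acc = f + warp_flow(acc, f)
    return acc
```

On smooth, occlusion-free motion the two orders agree; the paper's argument concerns how warping and interpolation errors compound under occlusion, where backward accumulation proves superior.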
Related papers
- Constant Acceleration Flow [13.49794130678208]
Rectified flow and reflow procedures have advanced fast generation by progressively straightening ordinary differential equation (ODE) flows.
They operate under the assumption that image and noise pairs, known as couplings, can be approximated by straight trajectories with constant velocity.
We introduce Constant Acceleration Flow (CAF), a novel framework based on a simple constant acceleration equation.
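The distinction can be made concrete with elementary kinematics: rectified flow assumes the coupling travels a straight line at constant velocity, while CAF models it with a constant-acceleration trajectory. The sketch below illustrates these two path families generically; the acceleration `a` here is a free parameter, not CAF's learned quantity.

```python
import numpy as np

def straight_path(x0, x1, t):
    """Constant-velocity coupling assumed by rectified flow:
    x_t = (1 - t) * x0 + t * x1, so dx/dt = x1 - x0 is constant."""
    return (1.0 - t) * x0 + t * x1

def const_accel_path(x0, x1, a, t):
    """Constant-acceleration trajectory x_t = x0 + v0*t + 0.5*a*t^2,
    with v0 chosen so the path still lands on x1 at t = 1."""
    v0 = (x1 - x0) - 0.5 * a
    return x0 + v0 * t + 0.5 * a * t * t
```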
arXiv Detail & Related papers (2024-11-01T02:43:56Z)
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow delivers strong performance on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
arXiv Detail & Related papers (2023-11-28T07:53:51Z)
- GAFlow: Incorporating Gaussian Attention into Optical Flow [62.646389181507764]
We incorporate Gaussian Attention (GA) into optical flow models to accentuate local properties during representation learning.
We introduce a novel Gaussian-Constrained Layer (GCL) which can be easily plugged into existing Transformer blocks.
For reliable motion analysis, we provide a new Gaussian-Guided Attention Module (GGAM).
arXiv Detail & Related papers (2023-09-28T07:46:01Z)
- Normalizing flow neural networks by JKO scheme [22.320632565424745]
We develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto scheme.
The proposed method stacks residual blocks one after another, allowing efficient block-wise training.
Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance.
arXiv Detail & Related papers (2022-12-29T18:55:00Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads and are not directly trained to produce a stable estimate.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
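The fixed-point idea can be sketched in a few lines: instead of unrolling a fixed number of recurrent updates, solve z* = f(z*, x) directly. The naive iteration below is illustrative only; DEQ models use quasi-Newton root solvers and differentiate through the equilibrium implicitly rather than backpropagating through iterations.

```python
import numpy as np

def deq_solve(f, x, z0, tol=1e-8, max_iter=200):
    """Naive fixed-point solver: iterate z <- f(z, x) until the update
    falls below `tol`. Stands in for the root solver a DEQ layer uses."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction: z* = 0.5 * z* + x has the closed-form solution z* = 2x.
fixed = deq_solve(lambda z, x: 0.5 * z + x, x=np.array([1.0]), z0=np.zeros(1))
```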
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
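The global-matching step described above can be sketched as follows: correlate every source feature with all target features, normalize the scores with a softmax, and read off the expected target coordinates. This is an illustrative NumPy reduction of the matching layer only; the Transformer feature enhancement and self-attention propagation are omitted, and the function name is ours.

```python
import numpy as np

def global_matching_flow(f1, f2, tau=1.0):
    """Dense flow by global matching between feature maps of shape (H, W, C):
    softmax over all-pairs correlation, then the softmax-weighted expectation
    of target coordinates gives a per-pixel displacement."""
    H, W, C = f1.shape
    src = f1.reshape(H * W, C)
    tgt = f2.reshape(H * W, C)
    corr = src @ tgt.T / np.sqrt(C)             # (HW, HW) similarity scores
    corr = corr - corr.max(axis=1, keepdims=True)
    prob = np.exp(corr / tau)
    prob /= prob.sum(axis=1, keepdims=True)     # softmax over target positions
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    matched = prob @ coords                     # expected (x, y) per source pixel
    return (matched - coords).reshape(H, W, 2)  # displacement field
```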
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- LiteFlowNet3: Resolving Correspondence Ambiguity for More Accurate Optical Flow Estimation [99.19322851246972]
We introduce LiteFlowNet3, a deep network consisting of two specialized modules to address the problem of optical flow estimation.
LiteFlowNet3 not only achieves promising results on public benchmarks but also has a small model size and a fast runtime.
arXiv Detail & Related papers (2020-07-18T03:30:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.