Learning Optical Flow with Adaptive Graph Reasoning
- URL: http://arxiv.org/abs/2202.03857v1
- Date: Tue, 8 Feb 2022 13:41:20 GMT
- Title: Learning Optical Flow with Adaptive Graph Reasoning
- Authors: Ao Luo, Fan Yang, Kunming Luo, Xin Li, Haoqiang Fan, Shuaicheng Liu
- Abstract summary: Estimating per-pixel motion between video frames, known as optical flow, is a long-standing problem in video understanding and analysis.
We introduce a novel graph-based approach, called adaptive graph reasoning for optical flow (AGFlow), to emphasize the value of scene/context information in optical flow.
On the Sintel clean and final passes, AGFlow achieves the best accuracy, with EPEs of 1.43 and 2.47 pixels, outperforming state-of-the-art approaches by 11.2% and 13.6%, respectively.
- Score: 35.348449774221656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating per-pixel motion between video frames, known as optical flow, is a
long-standing problem in video understanding and analysis. Most contemporary
optical flow techniques largely focus on cross-image matching via feature
similarity, with few methods considering how to explicitly reason over the
given scene to achieve a holistic motion understanding. In this
work, taking a fresh perspective, we introduce a novel graph-based approach,
called adaptive graph reasoning for optical flow (AGFlow), to emphasize the
value of scene/context information in optical flow. Our key idea is to decouple
the context reasoning from the matching procedure, and exploit scene
information to effectively assist motion estimation by learning to reason over
the adaptive graph. The proposed AGFlow can effectively exploit the context
information and incorporate it within the matching procedure, producing more
robust and accurate results. On both the Sintel clean and final passes, our
AGFlow achieves the best accuracy, with end-point errors (EPE) of 1.43 and
2.47 pixels, outperforming state-of-the-art approaches by 11.2% and 13.6%,
respectively.
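EPE (end-point error) is the mean Euclidean distance between predicted and ground-truth flow vectors; the quoted 11.2% and 13.6% gains are consistent with baseline EPEs of roughly 1.61 and 2.86 pixels (1.43 ≈ 0.888 × 1.61, 2.47 ≈ 0.864 × 2.86). The abstract does not give implementation details, so the following is only a minimal sketch of the general graph-reasoning idea it describes (project pixel features onto a few graph nodes, propagate over a learnable adjacency, project back and fuse); every name here, such as `GraphReasoning` and `num_nodes`, is hypothetical rather than AGFlow's actual code:

```python
# Minimal sketch of graph reasoning over context features. Illustrative only:
# AGFlow's actual architecture is not specified in the abstract, and all
# names below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphReasoning(nn.Module):
    """Project pixel features onto K graph nodes, reason, and project back."""

    def __init__(self, channels: int, num_nodes: int = 16):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_nodes, kernel_size=1)  # soft pixel-to-node assignment
        self.node_fc = nn.Linear(channels, channels)                 # node feature transform
        self.adj = nn.Parameter(torch.eye(num_nodes))                # learnable ("adaptive") adjacency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        a = F.softmax(self.assign(x).flatten(2), dim=-1)    # (B, K, H*W): each node attends over pixels
        feats = x.flatten(2).transpose(1, 2)                # (B, H*W, C)
        nodes = a @ feats                                   # (B, K, C): aggregate pixels into nodes
        nodes = F.relu(self.node_fc(self.adj @ nodes))      # one graph-convolution step over the adjacency
        out = a.transpose(1, 2) @ nodes                     # (B, H*W, C): redistribute node context to pixels
        return x + out.transpose(1, 2).reshape(b, c, h, w)  # residual fusion with the input features

def epe(flow_pred: torch.Tensor, flow_gt: torch.Tensor) -> torch.Tensor:
    """End-point error for flows of shape (B, 2, H, W)."""
    return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()
```

The residual fusion at the end reflects the abstract's claim of incorporating context into, rather than replacing, the matching features.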
Related papers
- MemFlow: Optical Flow Estimation and Prediction with Memory [54.22820729477756]
We present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Our method enables memory read-out and update modules for aggregating historical motion information in real time (a generic sketch of such a read-out follows this entry).
Our approach seamlessly extends to the future prediction of optical flow based on past observations.
arXiv Detail & Related papers (2024-04-07T04:56:58Z)
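A generic attention-based read-out over a fixed-length buffer of historical motion features might look like the following (hypothetical names; not MemFlow's actual implementation):

```python
# Generic attention read-out from a motion-memory buffer (hypothetical
# sketch; not MemFlow's actual implementation).
import torch
import torch.nn.functional as F

def memory_readout(query: torch.Tensor, mem_keys: torch.Tensor,
                   mem_vals: torch.Tensor) -> torch.Tensor:
    """query: (B, N, C) current motion features; mem_keys/mem_vals: (B, M, C)
    historical entries. Returns aggregated historical context, shape (B, N, C)."""
    attn = F.softmax(query @ mem_keys.transpose(1, 2) / query.shape[-1] ** 0.5, dim=-1)
    return attn @ mem_vals

def memory_update(mem: torch.Tensor, new_entry: torch.Tensor, max_len: int = 8) -> torch.Tensor:
    """Append the newest (B, C) motion features; keep at most max_len entries."""
    mem = torch.cat([mem, new_entry.unsqueeze(1)], dim=1)
    return mem[:, -max_len:]  # drop the oldest entries beyond the budget
```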
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision [6.173968909465726]
We present OptFlow, a fast optimization-based scene flow estimation method.
It achieves state-of-the-art performance for scene flow estimation on popular autonomous driving benchmarks.
arXiv Detail & Related papers (2024-01-04T21:47:56Z)
- TransFlow: Transformer as Flow Learner [22.727953339383344]
We propose TransFlow, a pure transformer architecture for optical flow estimation.
It provides more accurate correlation and trustworthy matching, and recovers more compromised information through long-range temporal association in dynamic scenes (a generic sketch of attention-based cross-frame matching follows this entry).
arXiv Detail & Related papers (2023-04-23T03:11:23Z)
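Transformer flow learners typically replace hand-built correlation volumes with attention between the two frames' features; a generic sketch of such cross-frame matching (illustrative only, not TransFlow's actual code) is:

```python
# Generic transformer-style all-pairs matching between two frames' features
# (illustrative sketch; not TransFlow's actual architecture).
import torch
import torch.nn.functional as F

def cross_frame_attention(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """f1, f2: (B, C, H, W) features of consecutive frames. Returns frame-1
    positions augmented with softly matched frame-2 context, shape (B, C, H, W)."""
    b, c, h, w = f1.shape
    q = f1.flatten(2).transpose(1, 2)                            # (B, H*W, C) queries from frame 1
    k = f2.flatten(2).transpose(1, 2)                            # (B, H*W, C) keys/values from frame 2
    attn = F.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)   # (B, H*W, H*W) matching scores
    out = attn @ k                                               # soft matching: aggregate frame-2 features
    return out.transpose(1, 2).reshape(b, c, h, w)
```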
- Rethinking Optical Flow from Geometric Matching Consistent Perspective [38.014569953980754]
We rethink previous optical flow estimation from a geometric matching perspective.
We use geometric image matching (GIM) as a pre-training task for optical flow estimation (MatchFlow), yielding better feature representations.
Our method achieves 11.5% and 10.1% error reductions relative to GMA on the Sintel clean pass and the KITTI test set.
arXiv Detail & Related papers (2023-03-15T06:00:38Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has proven highly beneficial to video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming, and caching the extracted optical flow is expensive in storage.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator (a generic sketch of such an unsupervised objective follows this entry).
arXiv Detail & Related papers (2021-03-05T04:14:32Z)
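An unsupervised flow estimator is trained without labels; a standard objective combines a photometric warping loss with a smoothness prior. The sketch below shows that generic recipe (not necessarily UF-TSN's exact loss):

```python
# Generic unsupervised optical-flow objective: photometric warping loss plus
# first-order smoothness (a standard recipe, not necessarily UF-TSN's loss).
import torch
import torch.nn.functional as F

def warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp img (B, C, H, W) by flow (B, 2, H, W), flow in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(img.device)   # (2, H, W), channel 0 is x
    coords = base.unsqueeze(0) + flow                            # absolute sample positions
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0                # normalize to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def unsupervised_flow_loss(frame1, frame2, flow, smooth_weight: float = 0.1):
    photometric = (frame1 - warp(frame2, flow)).abs().mean()     # brightness constancy
    smooth = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
             (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return photometric + smooth_weight * smooth
```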
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Single Image Optical Flow Estimation with an Event Camera [38.92408855196647]
Event cameras are bio-inspired sensors that report intensity changes at microsecond resolution.
We propose an optical flow estimation approach based on a single (potentially blurred) image and events.
arXiv Detail & Related papers (2020-04-01T11:28:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.