GAFlow: Incorporating Gaussian Attention into Optical Flow
- URL: http://arxiv.org/abs/2309.16217v1
- Date: Thu, 28 Sep 2023 07:46:01 GMT
- Title: GAFlow: Incorporating Gaussian Attention into Optical Flow
- Authors: Ao Luo, Fan Yang, Xin Li, Lang Nie, Chunyu Lin, Haoqiang Fan,
Shuaicheng Liu
- Abstract summary: We push Gaussian Attention (GA) into optical flow models to accentuate local properties during representation learning.
We introduce a novel Gaussian-Constrained Layer (GCL) which can be easily plugged into existing Transformer blocks.
For reliable motion analysis, we provide a new Gaussian-Guided Attention Module (GGAM).
- Score: 62.646389181507764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow, or the estimation of motion fields from image sequences, is one
of the fundamental problems in computer vision. Unlike most pixel-wise tasks
that aim at achieving consistent representations of the same category, optical
flow raises extra demands for obtaining local discrimination and smoothness,
which are not yet fully explored by existing approaches. In this paper, we push
Gaussian Attention (GA) into optical flow models to accentuate local
properties during representation learning and enforce the motion affinity
during matching. Specifically, we introduce a novel Gaussian-Constrained Layer
(GCL) which can be easily plugged into existing Transformer blocks to highlight
the local neighborhood that contains fine-grained structural information.
Moreover, for reliable motion analysis, we provide a new Gaussian-Guided
Attention Module (GGAM), which not only inherits properties of the Gaussian
distribution to naturally concentrate on the neighboring fields of each point,
but is also able to emphasize contextually related regions during matching.
Our fully equipped model, the Gaussian Attention Flow
network (GAFlow), naturally incorporates a series of novel Gaussian-based
modules into the conventional optical flow framework for reliable motion
analysis. Extensive experiments on standard optical flow datasets consistently
demonstrate the exceptional performance of the proposed approach in terms of
both generalization ability and online benchmark testing. Code is
available at https://github.com/LA30/GAFlow.
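To make the core idea concrete, here is a minimal sketch (an illustration, not the authors' implementation; see the repository above for the real GCL and GGAM) of how a Gaussian positional prior can be folded into a standard attention layer: attention weights are reweighted by a Gaussian of the spatial distance between query and key positions, so each pixel attends mostly to its local neighborhood. The function name, the fixed bandwidth sigma, and the single-head formulation are illustrative assumptions.

```python
import torch

def gaussian_biased_attention(q, k, v, coords, sigma=2.0):
    """Single-head attention over N pixels, reweighted by a Gaussian prior
    on the spatial distance between query and key positions.

    q, k, v: (N, C) feature tensors; coords: (N, 2) pixel (x, y) positions.
    NOTE: an illustrative sketch, not the GCL/GGAM from the paper.
    """
    scale = q.shape[-1] ** 0.5
    scores = (q @ k.transpose(-2, -1)) / scale      # (N, N) similarity logits
    dist2 = torch.cdist(coords, coords) ** 2        # (N, N) squared distances
    prior = torch.exp(-dist2 / (2 * sigma ** 2))    # Gaussian locality prior
    attn = torch.softmax(scores, dim=-1) * prior    # suppress far-away keys
    attn = attn / attn.sum(dim=-1, keepdim=True)    # renormalize each row
    return attn @ v                                 # (N, C) aggregated values

# Toy usage on an 8x8 feature grid with 16 channels.
h = w = 8
ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).float().reshape(-1, 2)
feat = torch.randn(h * w, 16)
out = gaussian_biased_attention(feat, feat, feat, coords)
print(out.shape)  # torch.Size([64, 16])
```

Multiplying after the softmax and renormalizing is one simple way to impose the prior; adding the log-prior to the logits before the softmax is an equivalent alternative.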
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Hierarchical Graph Pattern Understanding for Zero-Shot VOS [102.21052200245457]
This paper proposes a new hierarchical graph neural network (GNN) architecture, HGPU, for zero-shot video object segmentation (ZS-VOS).
Inspired by the strong ability of GNNs to capture structural relations, HGPU innovatively leverages motion cues (i.e., optical flow) to enhance the high-order representations from the neighbors of target frames.
arXiv Detail & Related papers (2023-12-15T04:13:21Z)
- Identification of vortex in unstructured mesh with graph neural networks [0.0]
We present a Graph Neural Network (GNN) based model with a U-Net architecture to identify vortices in CFD results on unstructured meshes.
A vortex auto-labeling method is proposed to label vortex regions in 2D CFD meshes.
arXiv Detail & Related papers (2023-11-11T12:10:16Z)
- EM-driven unsupervised learning for efficient motion segmentation [3.5232234532568376]
This paper presents a CNN-based fully unsupervised method for motion segmentation from optical flow.
We use the Expectation-Maximization (EM) framework to derive the loss function and the training procedure of our motion segmentation neural network.
Our method outperforms comparable unsupervised methods and is very efficient (a toy EM sketch follows this entry).
arXiv Detail & Related papers (2022-01-06T14:35:45Z)
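For intuition, the sketch below shows the classical EM loop that this line of work builds on: flow vectors are softly assigned to K parametric motion models, alternating weighted least-squares model fits (M-step) with responsibility updates (E-step). The affine motion model, fixed noise level sigma, and uniform mixing weights are simplifying assumptions; the paper trains a segmentation network with an EM-derived loss rather than running this loop at test time.

```python
import numpy as np

def em_flow_segmentation(coords, flow, K=3, iters=20, sigma=1.0, seed=0):
    """Softly segment a flow field into K affine motion layers with EM.

    coords: (N, 2) pixel positions; flow: (N, 2) flow vectors.
    Illustrative sketch: fixed noise level, uniform mixing weights.
    """
    rng = np.random.default_rng(seed)
    N = coords.shape[0]
    X = np.hstack([coords, np.ones((N, 1))])   # (N, 3) homogeneous coordinates
    resp = rng.dirichlet(np.ones(K), size=N)   # (N, K) random soft assignments
    for _ in range(iters):
        # M-step: weighted least-squares affine fit for each motion model.
        models = []
        for k in range(K):
            w = np.sqrt(resp[:, k:k + 1])
            A, *_ = np.linalg.lstsq(X * w, flow * w, rcond=None)  # (3, 2)
            models.append(A)
        # E-step: responsibilities from Gaussian residual likelihoods.
        logp = np.stack(
            [-((flow - X @ A) ** 2).sum(axis=1) / (2 * sigma ** 2)
             for A in models], axis=1)
        logp -= logp.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
    return resp  # (N, K) per-pixel soft segment memberships
```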
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation via global matching.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark (a minimal sketch of the correlation-softmax matching step follows this entry).
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
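Below is a minimal sketch of the correlation-plus-softmax idea: correlate every pixel pair across the two frames, softmax over the target image to get a matching distribution, then read out flow (here via an expectation-of-coordinates readout, which is an assumption of this sketch; the Transformer feature enhancement and self-attention propagation stages are omitted).

```python
import torch

def global_matching_flow(feat1, feat2, temperature=0.1):
    """Dense flow by global matching: correlate all pixel pairs, softmax over
    the target image, then take the expected target coordinate per pixel.

    feat1, feat2: (H, W, C) feature maps of two frames. A sketch of the
    correlation-and-softmax idea, not the full GMFlow pipeline.
    """
    H, W, C = feat1.shape
    f1 = feat1.reshape(H * W, C)
    f2 = feat2.reshape(H * W, C)
    corr = (f1 @ f2.T) / C ** 0.5                     # (HW, HW) correlation volume
    prob = torch.softmax(corr / temperature, dim=-1)  # matching distribution
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().reshape(H * W, 2)
    match = prob @ grid                               # expected match per pixel
    return (match - grid).reshape(H, W, 2)            # flow = match - source

# Toy usage: identical features should give (near-)zero flow.
feat = torch.randn(16, 16, 32)
flow = global_matching_flow(feat, feat)
print(flow.abs().mean())  # close to 0 for a sharp softmax
```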
- Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z)
- CaloFlow: Fast and Accurate Generation of Calorimeter Showers with Normalizing Flows [0.0]
We introduce CaloFlow, a fast detector simulation framework based on normalizing flows.
For the first time, we demonstrate that normalizing flows can reproduce many-channel calorimeter showers with extremely high fidelity.
arXiv Detail & Related papers (2021-06-09T18:00:02Z)
- Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has proven to greatly benefit video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming, and caching the extracted optical flow is expensive in storage.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator.
arXiv Detail & Related papers (2021-03-05T04:14:32Z)
- Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.