CSFlow: Learning Optical Flow via Cross Strip Correlation for Autonomous
Driving
- URL: http://arxiv.org/abs/2202.00909v1
- Date: Wed, 2 Feb 2022 08:17:45 GMT
- Authors: Hao Shi, Yifan Zhou, Kailun Yang, Xiaoting Yin, Kaiwei Wang
- Abstract summary: CSFlow consists of two novel modules: a Cross Strip Correlation module (CSC) and a Correlation Regression Initialization module (CRI).
- Score: 9.562270891742982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow estimation is an essential task in self-driving systems, which
helps autonomous vehicles perceive temporal continuity information of
surrounding scenes. The calculation of all-pair correlation plays an important
role in many existing state-of-the-art optical flow estimation methods.
However, the reliance on local knowledge often limits the model's accuracy
under complex street scenes. In this paper, we propose CSFlow, a new deep
network architecture for optical flow estimation in autonomous driving, which
consists of two novel modules: a Cross Strip Correlation module (CSC) and a
Correlation Regression Initialization module (CRI). CSC utilizes a striping
operation across the target image and the attended image to encode global
context into correlation volumes, while maintaining high efficiency. CRI is
used to maximally exploit the global context for optical flow initialization.
Our method has achieved state-of-the-art accuracy on the public autonomous
driving dataset KITTI-2015. Code is publicly available at
https://github.com/MasterHow/CSFlow.
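The abstract's striping idea can be sketched in a few lines. The following is an illustrative reconstruction only, not the authors' implementation from the repository above; the function name, the (H, W, C) feature-map layout, and the dot-product normalization are assumptions:

```python
import numpy as np

def cross_strip_correlation(f1, f2):
    """Correlate each position of f1 with f2's features along the same
    row (horizontal strip) and column (vertical strip), rather than with
    all H*W positions as in an all-pair correlation volume.

    f1, f2 : feature maps of shape (H, W, C).
    Returns an (H, W, W + H) volume: per position, W horizontal
    correlations followed by H vertical correlations.
    """
    H, W, C = f1.shape
    scale = np.sqrt(C)  # dot-product scaling, as in attention
    corr = np.empty((H, W, W + H), dtype=f1.dtype)
    for i in range(H):
        for j in range(W):
            v = f1[i, j]                              # query feature, (C,)
            corr[i, j, :W] = f2[i, :, :] @ v / scale  # row strip of f2
            corr[i, j, W:] = f2[:, j, :] @ v / scale  # column strip of f2
    return corr
```

Per position this stores W + H correlations instead of H * W, so the volume grows as O(HW(H + W)) rather than O(H^2 W^2), which is how a cross-strip layout can keep a global receptive field affordable.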
Related papers
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z) - OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z) - GAFlow: Incorporating Gaussian Attention into Optical Flow [62.646389181507764]
We push Gaussian Attention (GA) into the optical flow models to accentuate local properties during representation learning.
We introduce a novel Gaussian-Constrained Layer (GCL) which can be easily plugged into existing Transformer blocks.
For reliable motion analysis, we provide a new Gaussian-Guided Attention Module (GGAM)
arXiv Detail & Related papers (2023-09-28T07:46:01Z) - TransFlow: Transformer as Flow Learner [22.727953339383344]
We propose TransFlow, a pure transformer architecture for optical flow estimation.
It provides more accurate correlation and trustworthy matching in flow estimation.
It recovers more compromised information in flow estimation through long-range temporal association in dynamic scenes.
arXiv Detail & Related papers (2023-04-23T03:11:23Z) - SemARFlow: Injecting Semantics into Unsupervised Optical Flow Estimation
for Autonomous Driving [5.342413115295559]
We introduce SemARFlow, an unsupervised optical flow network designed for autonomous driving data.
We show visible improvements around object boundaries as well as a greater ability to generalize across datasets.
arXiv Detail & Related papers (2023-03-10T21:17:14Z) - Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z) - STJLA: A Multi-Context Aware Spatio-Temporal Joint Linear Attention
Network for Traffic Forecasting [7.232141271583618]
We propose a novel deep learning model for traffic forecasting named Multi-Context Aware Spatio-Temporal Joint Linear Attention (STJLA).
STJLA applies linear attention to a joint graph to capture global dependence between all spatio-temporal nodes efficiently.
Experiments on two real-world traffic datasets, England and Temporal7, demonstrate that our STJLA achieves 9.83% and 3.08% accuracy improvements in the MAE measure over state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-04T06:39:18Z) - GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT's performance on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z) - AutoFlow: Learning a Better Training Set for Optical Flow [62.40293188964933]
AutoFlow is a method to render training data for optical flow.
AutoFlow achieves state-of-the-art accuracy in pre-training both PWC-Net and RAFT.
arXiv Detail & Related papers (2021-04-29T17:55:23Z) - FlowStep3D: Model Unrolling for Self-Supervised Scene Flow Estimation [87.74617110803189]
Estimating the 3D motion of points in a scene, known as scene flow, is a core problem in computer vision.
We present a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions.
arXiv Detail & Related papers (2020-11-19T23:23:48Z)
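Several entries above build on correlation volumes; GMFlow in particular matches features globally with a correlation-and-softmax layer. A minimal sketch of that idea follows. The function name, coordinate convention, and dot-product normalization are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def global_matching_flow(f1, f2):
    """Estimate flow by global matching: compute the all-pair correlation
    between f1 and f2, softmax-normalize it over f2's positions, and take
    the expected matching coordinate as each position's target.

    f1, f2 : feature maps of shape (H, W, C).
    Returns an (H, W, 2) flow field of (dx, dy) displacements.
    """
    H, W, C = f1.shape
    a = f1.reshape(H * W, C)
    b = f2.reshape(H * W, C)
    corr = a @ b.T / np.sqrt(C)                # (HW, HW) correlation
    corr -= corr.max(axis=1, keepdims=True)    # numerical stability
    p = np.exp(corr)
    p /= p.sum(axis=1, keepdims=True)          # softmax over f2 positions
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    matched = p @ coords                       # expected (x, y) per position
    flow = matched - coords                    # displacement field
    return flow.reshape(H, W, 2)
```

Because the softmax yields a probability distribution over all target positions, the flow is the expected matching coordinate minus the source coordinate; as the correlation peak sharpens, this reduces to nearest-neighbor matching.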
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.