FDFlowNet: Fast Optical Flow Estimation using a Deep Lightweight Network
- URL: http://arxiv.org/abs/2006.12263v1
- Date: Mon, 22 Jun 2020 14:01:01 GMT
- Title: FDFlowNet: Fast Optical Flow Estimation using a Deep Lightweight Network
- Authors: Lingtong Kong, Jie Yang
- Abstract summary: We present a lightweight yet effective model for real-time optical flow estimation, termed FDFlowNet (fast deep flownet).
We achieve better or similar accuracy on the challenging KITTI and Sintel benchmarks while being about 2 times faster than PWC-Net.
- Score: 12.249680550252327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Significant progress has been made for estimating optical flow using deep
neural networks. Advanced deep models often achieve accurate flow estimation at
the cost of considerable computational complexity and time-consuming training.
In this work, we present a lightweight yet effective model for
real-time optical flow estimation, termed FDFlowNet (fast deep flownet). We
achieve better or similar accuracy on the challenging KITTI and Sintel
benchmarks while being about 2 times faster than PWC-Net. This is achieved by a
carefully designed structure and newly proposed components. We first introduce
a U-shape network for constructing multi-scale features, which, unlike a
pyramid network, gives the upper levels a global receptive field. At each
scale, a partial fully connected structure with dilated convolutions is
proposed for flow estimation; it strikes a good balance among speed, accuracy,
and parameter count compared with sequentially connected and densely connected
structures. Experiments demonstrate that our model achieves state-of-the-art
performance while being fast and lightweight.
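As a rough illustration of the "partial fully connected structure with dilated convolutions," here is a minimal PyTorch sketch. The connectivity rule (each layer sees the block input plus only the previous layer's output, rather than all predecessors as in dense connectivity), the layer widths, and the dilation rates are assumptions for illustration; the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class PartialDenseDilatedBlock(nn.Module):
    """Hedged sketch of a 'partial fully connected' flow decoder block.

    Assumed reading: each conv layer receives the block input concatenated
    with the *previous* layer's output only, instead of all previous outputs
    (dense) or the previous output alone (sequential). Growing dilations
    enlarge the receptive field without extra parameters.
    """

    def __init__(self, in_ch, mid_ch=96, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.convs = nn.ModuleList()
        prev_ch = 0
        for d in dilations:
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch + prev_ch, mid_ch, 3, padding=d, dilation=d),
                nn.LeakyReLU(0.1, inplace=True)))
            prev_ch = mid_ch
        self.predict_flow = nn.Conv2d(in_ch + mid_ch, 2, 3, padding=1)

    def forward(self, x):
        prev = None
        for conv in self.convs:
            inp = x if prev is None else torch.cat([x, prev], dim=1)
            prev = conv(inp)
        return self.predict_flow(torch.cat([x, prev], dim=1))

# Usage: a 64-channel feature map yields a 2-channel flow field
flow = PartialDenseDilatedBlock(64)(torch.randn(1, 64, 48, 64))
```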
Related papers
- NeuFlow v2: High-Efficiency Optical Flow Estimation on Edge Devices [6.157420789049589]
We propose a highly efficient optical flow method that balances high accuracy with reduced computational demands.
We introduce new components including a much more lightweight backbone and a fast refinement module.
Our model achieves a 10x-70x speedup while maintaining comparable performance on both synthetic and real-world data.
arXiv Detail & Related papers (2024-08-19T17:13:34Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolution-transformer design.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
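A minimal sketch of the CWT step described above, using PyWavelets; the wavelet family ("morl") and scale range are assumptions, since the summary does not specify them.

```python
import numpy as np
import pywt

def cwt_tensor(signal_1d, scales=np.arange(1, 65), wavelet="morl"):
    """Turn a 1D behavioral signal into a 2D (scale x time) tensor via
    the Continuous Wavelet Transform, as the 'TC' stream does."""
    coeffs, _freqs = pywt.cwt(signal_1d, scales, wavelet)
    return np.abs(coeffs)  # (len(scales), len(signal_1d)) scalogram

# Example: a 4-second, 30 Hz signal becomes a (64, 120) tensor
sig = np.sin(2 * np.pi * 2.0 * np.linspace(0, 4, 120))
tensor_2d = cwt_tensor(sig)
```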
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow excels on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
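A hedged sketch of the in-batch multi-frame idea at the interface level: T+1 frames yield T flow fields in one batched forward pass instead of T separate two-frame calls. The real SIM pipeline shares computation inside the network; `flow_net` here is a placeholder two-frame model.

```python
import torch

def in_batch_multiframe_flow(flow_net, frames):
    """frames: (T+1, 3, H, W) consecutive video frames.
    Returns (T, 2, H, W) flows, one per adjacent frame pair, from a
    single batched forward pass over a two-frame flow network."""
    src = frames[:-1]  # frames 0 .. T-1
    tgt = frames[1:]   # frames 1 .. T
    return flow_net(src, tgt)  # one forward over the whole batch
```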
arXiv Detail & Related papers (2023-11-28T07:53:51Z)
- Rethinking Lightweight Salient Object Detection via Network Depth-Width Tradeoff [26.566339984225756]
Existing salient object detection methods often adopt deeper and wider networks for better performance.
We propose a novel trilateral decoder framework by decoupling the U-shape structure into three complementary branches.
We show that our method achieves a better efficiency-accuracy balance across five benchmarks.
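A hedged sketch of decoupling a single decoder into three parallel branches with fused outputs; the branches' roles, widths, and dilations here are illustrative assumptions, not the paper's actual trilateral design.

```python
import torch
import torch.nn as nn

class TrilateralDecoder(nn.Module):
    """Three lightweight parallel decoder branches replacing one
    monolithic U-shape decoder; outputs fused into a saliency map."""

    def __init__(self, in_ch=64, mid_ch=32):
        super().__init__()
        def branch(dilation):
            return nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, 3, padding=dilation, dilation=dilation),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, mid_ch, 3, padding=1))
        self.branches = nn.ModuleList([branch(d) for d in (1, 2, 4)])
        self.fuse = nn.Conv2d(3 * mid_ch, 1, 1)  # 1x1 fusion to one map

    def forward(self, feat):
        outs = [b(feat) for b in self.branches]
        return torch.sigmoid(self.fuse(torch.cat(outs, dim=1)))
```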
arXiv Detail & Related papers (2023-01-17T03:43:25Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
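A hedged sketch of the weight-inheritance step common in one-shot NAS: a discovered sub-network copies the leading slice of the super-network's weights instead of retraining from scratch. The slicing convention is the usual one-shot assumption, not FlowNAS's exact rule.

```python
import torch
import torch.nn as nn

def inherit_conv_weights(supernet_conv: nn.Conv2d,
                         sub_out_ch: int, sub_in_ch: int) -> nn.Conv2d:
    """Build a smaller conv for the discovered architecture and copy
    the leading [sub_out_ch, sub_in_ch] slice of the super-network's
    weights into it (groups=1 assumed)."""
    sub = nn.Conv2d(sub_in_ch, sub_out_ch, supernet_conv.kernel_size,
                    stride=supernet_conv.stride,
                    padding=supernet_conv.padding,
                    dilation=supernet_conv.dilation,
                    bias=supernet_conv.bias is not None)
    with torch.no_grad():
        sub.weight.copy_(supernet_conv.weight[:sub_out_ch, :sub_in_ch])
        if sub.bias is not None:
            sub.bias.copy_(supernet_conv.bias[:sub_out_ch])
    return sub
```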
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose the GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
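A minimal sketch of global matching via correlation and softmax in the spirit of GMFlow: each pixel of frame 1 is softly matched against all pixels of frame 2, and the flow is the expected displacement. Feature dimensions and scaling are simplified relative to the full model.

```python
import torch
import torch.nn.functional as F

def global_matching_flow(feat1, feat2):
    """feat1, feat2: (C, H, W) feature maps of the two frames.
    Returns an (H, W, 2) flow field (dx, dy)."""
    c, h, w = feat1.shape
    f1 = feat1.flatten(1).t()          # (H*W, C)
    f2 = feat2.flatten(1).t()          # (H*W, C)
    corr = f1 @ f2.t() / c ** 0.5      # global correlation volume
    prob = F.softmax(corr, dim=-1)     # soft matching distribution
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().view(-1, 2)  # pixel coords
    matched = prob @ grid              # expected match coordinates
    return (matched - grid).view(h, w, 2)
```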
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
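A minimal sketch of the generic coarse-to-fine step that PWC-style networks such as FastFlowNet build on: upsample the coarse flow, warp the second frame's features toward the first, and predict a residual. FastFlowNet's specific innovations are abstracted behind the `decoder` placeholder.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a (B, C, H, W) feature map by a (B, 2, H, W) flow."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys]).float().to(feat.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                      # follow the flow
    # normalize sampling coordinates to [-1, 1] for grid_sample
    cx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    cy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack([cx, cy], dim=-1),
                         align_corners=True)

def coarse_to_fine_step(decoder, feat1, feat2, coarse_flow):
    """One pyramid level: upsample coarse flow, warp, predict residual."""
    up_flow = 2.0 * F.interpolate(coarse_flow, scale_factor=2,
                                  mode="bilinear", align_corners=False)
    feat2_warped = warp(feat2, up_flow)
    residual = decoder(torch.cat([feat1, feat2_warped, up_flow], dim=1))
    return up_flow + residual
```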
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
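For reference, the standard FLOPs count for a convolution layer, which is the quantity a FLOPs-guided channel search trades against accuracy when widening or narrowing layers; the paper's exact FLOPs utilization ratio and adjustment rule are not detailed in the summary.

```python
def conv2d_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """FLOPs of one k x k convolution (multiply-accumulates x 2)."""
    return 2 * (c_in // groups) * c_out * k * k * h_out * w_out

# Example: a 3x3 conv, 64 -> 128 channels, on a 56x56 output map
print(conv2d_flops(64, 128, 3, 56, 56))  # ~462 MFLOPs
```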
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze the design of skip connections in the context of improving both accuracy and efficiency over the state of the art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
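A hedged sketch of a soft-gated skip connection: a learnable per-channel gate scales one branch of a residual block, letting the network smoothly modulate each block's contribution. The exact placement and form of the gate in the paper may differ.

```python
import torch
import torch.nn as nn

class SoftGatedSkip(nn.Module):
    """Residual block whose residual branch is scaled by a learnable
    per-channel gate alpha (gate placement assumed, not exact)."""

    def __init__(self, channels, block):
        super().__init__()
        self.block = block  # any residual function F(x)
        self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return x + self.alpha * self.block(x)

# Usage: gate a simple two-conv body
body = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(64, 64, 3, padding=1))
layer = SoftGatedSkip(64, body)
```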
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.