DIP: Deep Inverse Patchmatch for High-Resolution Optical Flow
- URL: http://arxiv.org/abs/2204.00330v1
- Date: Fri, 1 Apr 2022 10:13:59 GMT
- Title: DIP: Deep Inverse Patchmatch for High-Resolution Optical Flow
- Authors: Zihua Zheng, Ni Nie, Zhi Ling, Pengfei Xiong, Jiangyu Liu, Hao Wang,
Jiankun Li
- Abstract summary: We propose a novel Patchmatch-based framework for high-resolution optical flow estimation.
It achieves high-precision results with lower memory consumption, benefiting from the propagation and local search of Patchmatch.
Our method ranks first on all the metrics on the popular KITTI2015 benchmark, and ranks second on EPE on the Sintel clean benchmark among published optical flow methods.
- Score: 7.73554718719193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, dense correlation volume methods have achieved
state-of-the-art performance in optical flow. However, the correlation volume
computation
requires a lot of memory, which makes prediction difficult on high-resolution
images. In this paper, we propose a novel Patchmatch-based framework for
high-resolution optical flow estimation. Specifically, we introduce the first
end-to-end Patchmatch-based deep learning method for optical flow. It achieves
high-precision results with lower memory consumption, benefiting from the
propagation and local search of Patchmatch. Furthermore, a new inverse
propagation is proposed to
decouple the complex operations of propagation, which can significantly reduce
calculations in multiple iterations. At the time of submission, our method
ranks first on all the metrics on the popular KITTI2015 benchmark, and ranks
second on EPE on the Sintel clean benchmark among published optical flow
methods. Experiments show our method has strong cross-dataset generalization
ability: the F1-all reaches 13.73%, a 21% reduction from the best published
result of 17.4% on KITTI2015. Moreover, our method preserves details well on
the high-resolution DAVIS dataset and consumes half the memory of RAFT.
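To make the propagation-and-local-search idea concrete, here is a minimal sketch of one generic Patchmatch propagation step for per-pixel flow candidates, written in PyTorch. It illustrates only the mechanism the abstract refers to: the paper's inverse propagation (which decouples these shift-and-compare operations) and the random local search are omitted, and all names are illustrative assumptions rather than the authors' code.

```python
# A hedged sketch of generic Patchmatch propagation for optical flow:
# every pixel keeps one flow candidate and adopts a neighbor's candidate
# whenever that candidate matches better. Not DIP's implementation.
import torch
import torch.nn.functional as F

def matching_cost(feat1, feat2, flow):
    """Negative feature correlation under a candidate flow.
    feat1, feat2: (B, C, H, W); flow: (B, 2, H, W) in pixels."""
    B, _, H, W = feat1.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow)       # (2, H, W)
    coords = grid.unsqueeze(0) + flow                          # target coords
    # Normalize coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    warped = F.grid_sample(feat2, torch.stack((gx, gy), dim=-1),
                           align_corners=True)
    return -(feat1 * warped).sum(dim=1)                        # (B, H, W)

def propagation_step(feat1, feat2, flow):
    """One propagation sweep: compare against the 4 neighbors' candidates."""
    best_flow, best_cost = flow, matching_cost(feat1, feat2, flow)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        cand = torch.roll(flow, shifts=(dy, dx), dims=(2, 3))  # neighbor's flow
        cost = matching_cost(feat1, feat2, cand)
        take = (cost < best_cost).unsqueeze(1)
        best_flow = torch.where(take, cand, best_flow)
        best_cost = torch.minimum(cost, best_cost)
    return best_flow
```

In this generic form, every iteration both shifts candidates and re-evaluates costs; the inverse propagation proposed in the paper is motivated precisely by decoupling such repeated operations across iterations.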
Related papers
- HMAFlow: Learning More Accurate Optical Flow via Hierarchical Motion Field Alignment [0.5825410941577593]
We present a novel method, dubbed HMAFlow, to improve optical flow estimation in challenging scenes.
The proposed model mainly consists of two core components: a Hierarchical Motion Field Alignment (HMA) module and a Correlation Self-Attention (CSA) module.
Experimental results demonstrate that our model achieves the best generalization performance compared to other state-of-the-art methods.
arXiv Detail & Related papers (2024-09-09T11:43:35Z)
- Rethinking Optical Flow from Geometric Matching Consistent Perspective [38.014569953980754]
We propose rethinking previous approaches to optical flow estimation.
We use GIM as a pre-training task for optical flow estimation (MatchFlow), yielding better feature representations.
Our method achieves 11.5% and 10.1% error reductions relative to GMA on the Sintel clean pass and the KITTI test set.
arXiv Detail & Related papers (2023-03-15T06:00:38Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs, including quantization, quantization-aware retraining, and entropy coding (the quantization step is sketched below).
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
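As a rough illustration of the pipeline's first stage, a minimal uniform weight-quantization step is sketched below; the bit-width, the symmetric scheme, and the function name are assumptions, and the retraining and entropy-coding stages are omitted.

```python
# A toy sketch of uniform weight quantization, the first stage of the INR
# compression pipeline named above. Bit-width and the symmetric scheme are
# illustrative assumptions, not the authors' settings.
import torch

def quantize_dequantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Round w to a symmetric num_bits grid and map back to floats."""
    qmax = 2 ** (num_bits - 1) - 1               # e.g. 127 for 8 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q * scale                             # dequantized weights
```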
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark (the matching layer is sketched below).
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
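The correlation-plus-softmax matching can be written compactly as a soft argmax over all target positions. The sketch below is an illustrative rendering of that idea under assumed shapes and a made-up temperature parameter, not the authors' code.

```python
# A hedged sketch of global matching via correlation + softmax: flow as the
# softmax-weighted average of target coordinates minus source coordinates.
import torch

def global_matching_flow(feat1, feat2, temperature=0.1):
    """feat1, feat2: (B, C, H, W) -> flow estimate (B, 2, H, W)."""
    B, C, H, W = feat1.shape
    f1 = feat1.flatten(2).transpose(1, 2)                  # (B, H*W, C)
    f2 = feat2.flatten(2)                                  # (B, C, H*W)
    corr = torch.bmm(f1, f2) / C ** 0.5                    # all-pairs similarity
    prob = torch.softmax(corr / temperature, dim=-1)       # matching distribution
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    coords = torch.stack((xs, ys), 0).float().reshape(2, -1).to(feat1)
    # Expected target coordinate for each source pixel, then displacement.
    expected = torch.einsum("bij,cj->bci", prob, coords)   # (B, 2, H*W)
    return (expected - coords.unsqueeze(0)).reshape(B, 2, H, W)
```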
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- High-Resolution Optical Flow from 1D Attention and Correlation [89.61824964952949]
We propose a new method for high-resolution optical flow estimation with significantly less computation.
We first perform a 1D attention operation in the vertical direction of the target image, and then a simple 1D correlation in the horizontal direction of the attended image.
Experiments on Sintel, KITTI, and real-world 4K-resolution images demonstrate the effectiveness and superiority of our proposed method (the factorization is sketched below).
arXiv Detail & Related papers (2021-04-28T17:56:34Z)
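The factorization replaces a full 2D cost volume with two 1D passes. Below is a hedged sketch under assumed shapes; the dot-product attention form and the `max_disp` parameter are assumptions, not the authors' implementation.

```python
# A hedged sketch of the 1D factorization: self-attention along the vertical
# axis of the target features, then correlation along horizontal shifts only.
import torch

def vertical_attention(feat2):
    """Dot-product self-attention over the H axis, per column.
    feat2: (B, C, H, W) -> (B, C, H, W)."""
    B, C, H, W = feat2.shape
    f = feat2.permute(0, 3, 2, 1).reshape(B * W, H, C)      # columns as sequences
    attn = torch.softmax(f @ f.transpose(1, 2) / C ** 0.5, dim=-1)
    return (attn @ f).reshape(B, W, H, C).permute(0, 3, 2, 1)

def horizontal_correlation(feat1, feat2_att, max_disp=64):
    """1D cost volume over horizontal displacements: (B, 2*max_disp+1, H, W).
    Memory grows with W * max_disp instead of (H*W)^2."""
    B, C, H, W = feat1.shape
    costs = []
    for d in range(-max_disp, max_disp + 1):
        shifted = torch.roll(feat2_att, shifts=-d, dims=3)  # wraps; mask in practice
        costs.append((feat1 * shifted).sum(dim=1) / C ** 0.5)
    return torch.stack(costs, dim=1)
```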
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often occupy a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- UPFlow: Upsampling Pyramid for Unsupervised Optical Flow Learning [34.580309867067946]
We present an unsupervised learning approach for optical flow estimation.
We design a self-guided upsample module to tackle the blur problem caused by bilinear upsampling between pyramid levels.
We propose a pyramid distillation loss that supervises intermediate levels by distilling the finest flow into pseudo labels (a sketch follows below).
arXiv Detail & Related papers (2020-12-01T01:57:46Z)
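A minimal sketch of such a pyramid distillation loss follows; the L1 form, equal level weights, and the rescaling details are assumptions rather than UPFlow's exact loss.

```python
# A hedged sketch of a pyramid distillation loss: the finest flow,
# downsampled and rescaled, acts as a fixed pseudo-label for coarser
# intermediate predictions.
import torch
import torch.nn.functional as F

def pyramid_distillation_loss(finest_flow, pyramid_flows):
    """finest_flow: (B, 2, H, W); pyramid_flows: coarser (B, 2, h, w) tensors."""
    H, W = finest_flow.shape[-2:]
    loss = finest_flow.new_zeros(())
    for flow_l in pyramid_flows:
        h, w = flow_l.shape[-2:]
        pseudo = F.interpolate(finest_flow.detach(), size=(h, w),
                               mode="bilinear", align_corners=False)
        # Flow vectors must be rescaled along with the spatial resolution.
        pseudo = pseudo * pseudo.new_tensor([w / W, h / H]).view(1, 2, 1, 1)
        loss = loss + (flow_l - pseudo).abs().mean()
    return loss
```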
- ScopeFlow: Dynamic Scene Scoping for Optical Flow [94.42139459221784]
We propose to modify the common training protocols of optical flow.
The improvement is based on observing the bias in sampling challenging data.
We find that both regularization and augmentation should decrease during the training protocol (a toy schedule is sketched below).
arXiv Detail & Related papers (2020-02-25T09:58:49Z)
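As a rough illustration of that finding, a toy decay schedule is sketched below; the linear form and all endpoint values are assumptions, not the paper's protocol.

```python
# A toy sketch of the schedule the summary suggests: regularization weight
# and augmentation probability both decay over training.
def decayed(start: float, end: float, step: int, total_steps: int) -> float:
    """Linearly interpolate from start to end as training progresses."""
    t = min(step / max(total_steps, 1), 1.0)
    return start + t * (end - start)

# Example: weight decay 1e-4 -> 1e-5, crop-augmentation prob 0.8 -> 0.2.
weight_decay = decayed(1e-4, 1e-5, step=30_000, total_steps=100_000)
aug_prob = decayed(0.8, 0.2, step=30_000, total_steps=100_000)
```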