AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation
- URL: http://arxiv.org/abs/2303.16493v1
- Date: Wed, 29 Mar 2023 07:03:51 GMT
- Title: AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation
- Authors: Hyunyoung Jung, Zhuo Hui, Lei Luo, Haitao Yang, Feng Liu, Sungjoo Yoo,
Rakesh Ranjan, Denis Demandolx
- Abstract summary: We introduce AnyFlow, a robust network that estimates accurate flow from images of various resolutions.
We establish new state-of-the-art cross-dataset generalization performance on the KITTI dataset.
- Score: 17.501820140334328
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To apply optical flow in practice, it is often necessary to resize the input
to smaller dimensions in order to reduce computational costs. However,
downsizing inputs makes the estimation more challenging because objects and
motion ranges become smaller. Even though recent approaches have demonstrated
high-quality flow estimation, they tend to fail to accurately model small
objects and precise boundaries when the input resolution is lowered,
restricting their applicability to high-resolution inputs. In this paper, we
introduce AnyFlow, a robust network that estimates accurate flow from images of
various resolutions. By representing optical flow as a continuous,
coordinate-based function, AnyFlow generates outputs at arbitrary scales
from low-resolution inputs, outperforming prior work in capturing tiny
objects and preserving detail across a wide range of scenes. We establish
new state-of-the-art performance in cross-dataset generalization on the
KITTI dataset, while achieving accuracy comparable to other SOTA methods
on the online benchmarks.
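To make the abstract's key idea concrete, here is a minimal sketch of a coordinate-based flow decoder. The names and design are a hypothetical simplification (the actual AnyFlow network couples this idea with a RAFT-style iterative estimator): a small MLP maps a continuous query coordinate, together with a bilinearly interpolated feature vector, to a 2D flow vector, so the same low-resolution features can be decoded at any output resolution.

```python
# Minimal sketch of a coordinate-based (implicit) flow decoder.
# Hypothetical simplification of the idea; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitFlowDecoder(nn.Module):
    """Maps continuous coordinates in [-1, 1]^2, plus locally
    interpolated features, to 2D flow vectors."""
    def __init__(self, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),  # (u, v) flow components
        )

    def forward(self, feat_map, coords):
        # feat_map: (B, C, h, w) low-res features; coords: (B, N, 2) in [-1, 1]
        sampled = F.grid_sample(feat_map, coords.unsqueeze(1),
                                align_corners=True)        # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)      # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], -1))  # (B, N, 2)

def make_grid(h_out, w_out):
    # Dense query grid at an arbitrary output resolution.
    ys = torch.linspace(-1, 1, h_out)
    xs = torch.linspace(-1, 1, w_out)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], -1).reshape(1, -1, 2)     # (1, H*W, 2)

# Decode flow at 2x the feature resolution from the same features.
decoder = ImplicitFlowDecoder()
feats = torch.randn(1, 128, 48, 64)
flow = decoder(feats, make_grid(96, 128)).reshape(1, 96, 128, 2)
```

Because the decoder is queried per coordinate rather than per pixel of a fixed grid, the output resolution becomes a free parameter at inference time.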
Related papers
- RMS-FlowNet++: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds [15.138542932078916]
RMS-FlowNet++ is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation.
Our architecture predicts faster than state-of-the-art methods, avoids high memory requirements, and enables efficient scene flow estimation on dense point clouds of more than 250K points at once.
arXiv Detail & Related papers (2024-07-01T09:51:17Z)
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
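For context on the rectified-flow idea the FlowIE entry above relies on, here is a toy sketch (not the authors' code, and all names are illustrative): a velocity field is trained along straight-line paths between an elementary (Gaussian) distribution and data samples, then integrated with a few Euler steps at inference.

```python
# Toy sketch of rectified flow (the idea FlowIE builds on; not the
# authors' code): learn a velocity field along straight-line paths
# x_t = (1 - t) * x0 + t * x1, then integrate it from noise to data.
import torch
import torch.nn as nn

dim = 16
velocity = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(),
                         nn.Linear(64, dim))
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

def train_step(x1):                        # x1: batch of "clean" samples
    x0 = torch.randn_like(x1)              # elementary (Gaussian) source
    t = torch.rand(x1.size(0), 1)
    xt = (1 - t) * x0 + t * x1             # point on the straight path
    target = x1 - x0                       # constant velocity of that path
    loss = ((velocity(torch.cat([xt, t], 1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample(n, steps=10):                   # few-step Euler integration
    x = torch.randn(n, dim)
    for i in range(steps):
        t = torch.full((n, 1), i / steps)
        x = x + velocity(torch.cat([x, t], 1)) / steps
    return x

for _ in range(200):                       # toy "data" centered at 3
    train_step(torch.randn(32, dim) + 3.0)
print(sample(4).mean().item())             # drifts toward ~3 after training
```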
- OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision [6.173968909465726]
We present OptFlow, a fast optimization-based scene flow estimation method.
It achieves state-of-the-art performance for scene flow estimation on popular autonomous driving benchmarks.
arXiv Detail & Related papers (2024-01-04T21:47:56Z)
- FuzzyFlow: Leveraging Dataflow To Find and Squash Program Optimization Bugs [92.47146416628965]
FuzzyFlow is a fault localization and test case extraction framework designed to test program optimizations.
We leverage dataflow program representations to capture a fully reproducible system state and area-of-effect for optimizations.
To reduce testing time, we design an algorithm for minimizing test inputs, trading off memory for recomputation.
arXiv Detail & Related papers (2023-06-28T13:00:17Z)
- Rethinking Optical Flow from Geometric Matching Consistent Perspective [38.014569953980754]
We propose a rethinking of previous approaches to optical flow estimation.
We use geometric image matching (GIM) as a pre-training task for optical flow estimation (MatchFlow), yielding better feature representations.
Our method achieves 11.5% and 10.1% error reductions over GMA on the Sintel clean pass and the KITTI test set, respectively.
arXiv Detail & Related papers (2023-03-15T06:00:38Z)
- Taming Contrast Maximization for Learning Sequential, Low-latency, Event-based Optical Flow [18.335337530059867]
Event cameras have gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems.
To unlock these solutions, it is necessary to develop algorithms that can leverage the unique nature of event data.
In this work, we propose a novel self-supervised learning pipeline for the estimation of event-based optical flow.
arXiv Detail & Related papers (2023-03-09T12:37:33Z)
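As background for the entry above: self-supervised event-based flow methods commonly build on a contrast-maximization objective. Here is a toy sketch of that objective (simplified and hypothetical, not this paper's pipeline): events are warped along a candidate flow to a common reference time, and the sharpness (variance) of the resulting event image scores the flow.

```python
# Toy sketch of contrast maximization for event-based flow (simplified;
# not this paper's pipeline): warp events to a reference time along a
# candidate flow and score the sharpness of the warped-event image.
import torch

def contrast_loss(events, flow, H, W):
    """events: (N, 3) rows of (x, y, t); flow: (2,) pixels per unit time.
    Returns the negative variance of the warped-event image."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = (x - t * flow[0]).long().clamp(0, W - 1)   # warp to t = 0
    yw = (y - t * flow[1]).long().clamp(0, H - 1)
    img = torch.zeros(H, W)
    img.index_put_((yw, xw), torch.ones_like(t), accumulate=True)
    return -img.var()   # sharper image (higher variance) = better flow

# Events from a point moving with flow (2, 1); grid search recovers it.
t = torch.linspace(0, 4, 200)
ev = torch.stack([8 + 2 * t, 8 + 1 * t, t], dim=1)
losses = {(u, v): contrast_loss(ev, torch.tensor([u, v], dtype=torch.float32),
                                32, 32).item()
          for u in range(-3, 4) for v in range(-3, 4)}
print(min(losses, key=losses.get))  # (2, 1)
```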
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads and are not directly trained to model such a stable estimate.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
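The deep-equilibrium idea in the entry above can be illustrated with a toy fixed-point solver (a sketch only; the authors' implementation uses accelerated root solvers and implicit differentiation): iterate the update operator until the flow state stops changing, rather than unrolling a fixed number of steps.

```python
# Toy sketch of the deep-equilibrium idea (not the authors' code):
# iterate the update operator f to a fixed point instead of unrolling
# a fixed number of recurrent steps.
import torch

def fixed_point_flow(f, flow0, tol=1e-4, max_iter=100):
    """Solve flow* = f(flow*) by naive fixed-point iteration."""
    flow = flow0
    for _ in range(max_iter):
        nxt = f(flow)
        if (nxt - flow).abs().max() < tol:
            return nxt
        flow = nxt
    return flow

# Contractive toy operator whose fixed point is `target`.
target = torch.randn(1, 2, 8, 8)
f = lambda z: 0.5 * z + 0.5 * target
flow_star = fixed_point_flow(f, torch.zeros_like(target))
assert torch.allclose(flow_star, target, atol=1e-3)
```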
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose GMFlow, a framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
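The correlation-and-softmax matching step described in the GMFlow entry above can be sketched as follows (a simplified toy, not the paper's code): correlate every source pixel with every target pixel, turn the scores into a distribution with a softmax, and read off flow as the expected displacement.

```python
# Toy sketch of softmax-based global matching (simplified from the
# GMFlow idea; not the paper's code).
import torch

def global_match_flow(f1, f2):
    """f1, f2: (C, H, W) feature maps. Returns flow of shape (H, W, 2)."""
    C, H, W = f1.shape
    a = f1.reshape(C, -1).t()                      # (H*W, C) source
    b = f2.reshape(C, -1)                          # (C, H*W) target
    prob = torch.softmax(a @ b / C ** 0.5, dim=1)  # matching distribution
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    coords = torch.stack([xs, ys], -1).reshape(-1, 2)  # (H*W, 2)
    matched = prob @ coords                        # expected target coords
    return (matched - coords).reshape(H, W, 2)     # flow = displacement

flow = global_match_flow(torch.randn(64, 32, 32), torch.randn(64, 32, 32))
print(flow.shape)  # torch.Size([32, 32, 2])
```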
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
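The sparse-correlation idea in the entry above can be illustrated with a top-k toy sketch (hypothetical; for clarity this toy still materializes the full volume, which a real implementation would avoid): keep only the k best matching scores per source pixel instead of the full H*W by H*W volume.

```python
# Toy sketch of a sparse (top-k) correlation volume (hypothetical;
# a real implementation computes top-k without building the full volume).
import torch

def topk_correlation(f1, f2, k=8):
    """f1, f2: (C, H, W) feature maps. Returns per-pixel top-k scores
    and flat indices of the matched target pixels, each (H*W, k)."""
    C = f1.shape[0]
    a = f1.reshape(C, -1).t()          # (H*W, C) source features
    b = f2.reshape(C, -1)              # (C, H*W) target features
    corr = a @ b / C ** 0.5            # full (H*W, H*W) volume
    return corr.topk(k, dim=1)         # keep a fraction of its elements

scores, idx = topk_correlation(torch.randn(64, 32, 32),
                               torch.randn(64, 32, 32))
print(scores.shape, idx.shape)  # torch.Size([1024, 8]) twice
```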
- Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has been shown to greatly benefit video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming and incurs heavy storage costs for caching the extracted flow.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator.
arXiv Detail & Related papers (2021-03-05T04:14:32Z)