Learning Optical Flow from a Few Matches
- URL: http://arxiv.org/abs/2104.02166v1
- Date: Mon, 5 Apr 2021 21:44:00 GMT
- Title: Learning Optical Flow from a Few Matches
- Authors: Shihao Jiang, Yao Lu, Hongdong Li, Richard Hartley
- Abstract summary: We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
- Score: 67.83633948984954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art neural network models for optical flow estimation require a
dense correlation volume at high resolutions for representing per-pixel
displacement. Although the dense correlation volume is informative for accurate
estimation, its heavy computation and memory usage hinder the efficient
training and deployment of the models. In this paper, we show that the dense
correlation volume representation is redundant and accurate flow estimation can
be achieved with only a fraction of elements in it. Based on this observation,
we propose an alternative displacement representation, named Sparse Correlation
Volume, which is constructed directly by computing the k closest matches in one
feature map for each feature vector in the other feature map and storing them in a
sparse data structure. Experiments show that our method can reduce
computational cost and memory use significantly, while maintaining high
accuracy compared to previous approaches with dense correlation volumes. Code
is available at https://github.com/zacjiang/scv .
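The construction described above, keeping only the k closest matches per feature vector instead of the full dense correlation volume, can be illustrated with a small NumPy sketch. This is a hypothetical illustration, not the authors' code, and the choice k=8 as well as the function name are assumptions; in a real implementation the dense matrix would never be materialized.

```python
import numpy as np

def sparse_correlation_volume(f1, f2, k=8):
    """f1, f2: (N, D) arrays of L2-normalized feature vectors,
    one row per pixel, flattened from an H x W feature map.
    Returns top-k correlation values and matched indices, each (N, k)."""
    # dense (N, N) correlation matrix, formed here only for clarity
    corr = f1 @ f2.T
    # indices of the k largest correlations per row (unordered within the k)
    topk_idx = np.argpartition(corr, -k, axis=1)[:, -k:]
    topk_val = np.take_along_axis(corr, topk_idx, axis=1)
    # sparse representation: O(N*k) storage instead of O(N^2)
    return topk_val, topk_idx
```

For an H x W feature map, N = H*W, so the sparse representation stores N*k values and indices rather than the N^2 entries of a dense correlation volume.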
Related papers
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation(NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
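For context, the NCE objective mentioned above trains a logistic discriminator between data and noise samples; with sigma the logistic sigmoid, p_theta the unnormalized model, and p_n the noise density, it is commonly written as (a standard form with equal data/noise sample sizes, notation assumed here, not taken from the paper):

```latex
J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \sigma\!\big(\log p_\theta(x) - \log p_n(x)\big)\right]
          + \mathbb{E}_{x \sim p_n}\!\left[\log\!\big(1 - \sigma\!\big(\log p_\theta(x) - \log p_n(x)\big)\big)\right]
```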
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Large Graph Signal Denoising with Application to Differential Privacy [2.867517731896504]
We consider the case of signal denoising on graphs via a data-driven wavelet tight frame methodology.
We make it scalable to large graphs using Chebyshev-Jackson approximations.
A comprehensive performance analysis is carried out on graphs of varying size, from real and simulated data.
arXiv Detail & Related papers (2022-09-05T16:32:54Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z) - Effective Streaming Low-tubal-rank Tensor Approximation via Frequent Directions [9.43704219585568]
This paper extends a popular matrix sketching technique, namely Frequent Directions, for constructing an efficient and accurate low-tubal-rank tensor approximation.
Specifically, the new algorithm allows the tensor data to be observed slice by slice, but only needs to maintain and incrementally update a much smaller sketch.
The rigorous theoretical analysis shows that the approximation error of the new algorithm can be arbitrarily small when the sketch size grows linearly.
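The matrix form of Frequent Directions, which the paper above extends to tensors, can be sketched in a few lines of NumPy. This is a minimal illustration of the classical matrix technique (Liberty, 2013), not the paper's tensor algorithm; the parameter names are illustrative.

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A (n x d) into an ell x d sketch B such that
    A^T A - B^T B is positive semidefinite with bounded spectral norm."""
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.where(~B.any(axis=1))[0]
        if zero_rows.size == 0:
            # sketch is full: shrink singular values, zeroing half the rows
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt
            zero_rows = np.where(~B.any(axis=1))[0]
        # place the incoming row in a freed (all-zero) row of B
        B[zero_rows[0]] = row
    return B
```

The sketch never stores more than ell rows at a time, which mirrors the slice-by-slice streaming setting described in the abstract.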
arXiv Detail & Related papers (2021-08-23T12:53:44Z) - Featurized Density Ratio Estimation [82.40706152910292]
In our work, we propose to leverage an invertible generative model to map the two distributions into a common feature space prior to estimation.
This featurization brings the densities closer together in latent space, sidestepping pathological scenarios where the learned density ratios in input space can be arbitrarily inaccurate.
At the same time, the invertibility of our feature map guarantees that the ratios computed in feature space are equivalent to those in input space.
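The equivalence claimed above follows from the change-of-variables formula; a one-line sketch (with f the invertible map and p', q' the pushforward densities of p, q under f) might look like:

```latex
\frac{p'(f(x))}{q'(f(x))}
  = \frac{p(x)\,\left|\det J_f(x)\right|^{-1}}{q(x)\,\left|\det J_f(x)\right|^{-1}}
  = \frac{p(x)}{q(x)}
```

The Jacobian factors cancel because both densities are transformed by the same map, which is why the ratio is preserved in feature space.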
arXiv Detail & Related papers (2021-07-05T18:30:26Z) - Scalable Vector Gaussian Information Bottleneck [19.21005180893519]
We study a variation of the problem, called scalable information bottleneck, in which the encoder outputs multiple descriptions of the observation.
We derive a variational inference type algorithm for general sources with unknown distribution; and show means of parametrizing it using neural networks.
arXiv Detail & Related papers (2021-02-15T12:51:26Z) - Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z) - Relative gradient optimization of the Jacobian term in unsupervised deep learning [9.385902422987677]
Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning.
Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian.
We propose a new approach for exact training of such neural networks.
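The bottleneck targeted here can be seen in a small calculation (a hedged sketch of the general relative-gradient idea, not the paper's full derivation): for a square weight matrix W, the Euclidean gradient of the log-determinant term requires a matrix inverse, but right-multiplying the gradient by W^T W (the relative gradient) turns it into W itself:

```latex
\nabla_W \log\left|\det W\right| = W^{-\top}
\quad\Longrightarrow\quad
\tilde{\nabla}_W = \big(\nabla_W\big)\, W^\top W = W^{-\top} W^\top W = W
```

The costly inversion is thus replaced by matrix multiplications, which is what makes exact maximum-likelihood training tractable.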
arXiv Detail & Related papers (2020-06-26T16:41:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.