Normalized Convolution Upsampling for Refined Optical Flow Estimation
- URL: http://arxiv.org/abs/2102.06979v1
- Date: Sat, 13 Feb 2021 18:34:03 GMT
- Title: Normalized Convolution Upsampling for Refined Optical Flow Estimation
- Authors: Abdelrahman Eldesokey, Michael Felsberg
- Abstract summary: Normalized Convolution UPsampler (NCUP) is an efficient joint upsampling approach to produce the full-resolution flow during the training of optical flow CNNs.
Our proposed approach formulates the upsampling task as a sparse problem and employs normalized convolutional neural networks to solve it.
We achieve state-of-the-art results on the Sintel benchmark with a ~6% error reduction, and on-par results on the KITTI dataset, while having 7.5% fewer parameters.
- Score: 23.652615797842085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow is a regression task where convolutional neural networks (CNNs)
have led to major breakthroughs. However, this comes with major computational
demands due to the use of cost-volumes and pyramidal representations. This was
mitigated by producing flow predictions at a quarter of the resolution, which are
upsampled using bilinear interpolation during test time. Consequently, fine
details are usually lost and post-processing is needed to restore them. We
propose the Normalized Convolution UPsampler (NCUP), an efficient joint
upsampling approach to produce the full-resolution flow during the training of
optical flow CNNs. Our proposed approach formulates the upsampling task as a
sparse problem and employs normalized convolutional neural networks to
solve it. We evaluate our upsampler against existing joint upsampling
approaches when trained end-to-end with a coarse-to-fine optical flow CNN
(PWCNet) and we show that it outperforms all other approaches on the
FlyingChairs dataset while having at least one order of magnitude fewer parameters.
Moreover, we test our upsampler with a recurrent optical flow CNN (RAFT) and we
achieve state-of-the-art results on the Sintel benchmark with ~6% error reduction, and
on-par results on the KITTI dataset, while having 7.5% fewer parameters (see Figure
1). Finally, our upsampler shows better generalization capabilities than RAFT
when trained and evaluated on different datasets.
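As a rough illustration of the normalized-convolution idea behind NCUP, the sketch below scatters the coarse flow into an otherwise empty full-resolution grid together with a confidence map, and normalizes every convolution output by the convolved confidences. This is a minimal, assumed-form sketch only: the function name, the fixed box kernel, and the confidence initialization are illustrative choices, not the authors' NCUP module with its learned applicability kernels.

```python
# Minimal sketch of confidence-normalized convolution upsampling (illustrative
# only; the actual NCUP module uses learned applicability kernels and is
# trained jointly with the flow network).
import torch
import torch.nn.functional as F

def normalized_conv_upsample(flow_lr, scale=4, eps=1e-8):
    """flow_lr: (B, 2, h, w) coarse flow -> (B, 2, h*scale, w*scale) dense flow."""
    b, c, h, w = flow_lr.shape
    ksize = 2 * scale + 1  # wide enough to bridge the gaps between scattered samples
    # Scatter coarse predictions into an otherwise empty full-resolution grid.
    flow_hr = torch.zeros(b, c, h * scale, w * scale, device=flow_lr.device)
    conf = torch.zeros(b, 1, h * scale, w * scale, device=flow_lr.device)
    flow_hr[:, :, ::scale, ::scale] = flow_lr * scale  # rescale displacements to full-res pixels
    conf[:, :, ::scale, ::scale] = 1.0                 # confidence 1 at the known samples
    # A fixed box kernel stands in for the learned applicability function.
    kernel = torch.ones(1, 1, ksize, ksize, device=flow_lr.device)
    num = F.conv2d(flow_hr * conf, kernel.repeat(c, 1, 1, 1),
                   padding=ksize // 2, groups=c)        # confidence-weighted signal
    den = F.conv2d(conf, kernel, padding=ksize // 2)    # propagated confidences
    return num / (den + eps)                            # normalized dense flow estimate

# Example: upsample a 32x32 coarse flow field to 128x128.
dense_flow = normalized_conv_upsample(torch.randn(1, 2, 32, 32))
```

The key design point the sketch tries to convey is that the sparse samples and their confidences are filtered jointly, so the output is a weighted average of valid samples rather than a blur over zeros, which is what makes the formulation suitable for sparse-to-dense upsampling.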
Related papers
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF)
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
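Purely as a hedged illustration of online hard sample mining in general (not this paper's NeRF-specific scheme; the function name and toy model below are assumptions), one can score a candidate batch by per-sample loss and back-propagate only through the hardest fraction:

```python
# Generic online hard sample mining step (illustrative sketch only): score a
# candidate batch and keep only the highest-loss samples for the backward pass.
import torch

def hard_mining_step(model, optimizer, x, y, keep_ratio=0.25):
    per_sample = torch.nn.functional.mse_loss(model(x), y, reduction="none")
    per_sample = per_sample.reshape(per_sample.shape[0], -1).mean(dim=1)  # one loss per sample
    k = max(1, int(keep_ratio * per_sample.numel()))
    hard_loss = per_sample.topk(k).values.mean()       # average over the hardest k samples
    optimizer.zero_grad()
    hard_loss.backward()
    optimizer.step()
    return hard_loss.item()

# Example with a toy regressor.
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 8), torch.randn(64, 1)
hard_mining_step(model, opt, x, y)
```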
arXiv Detail & Related papers (2024-08-06T13:49:01Z)
- Adaptive Heterogeneous Client Sampling for Federated Learning over Wireless Networks [27.545199007002577]
Federated learning (FL) algorithms sample a fraction of clients in each round (partial participation) when the number of participants is large.
Recent convergence analyses of FL have focused on slow wall-clock convergence due to client heterogeneity.
We propose a new tractable convergence analysis for FL with arbitrary client sampling probabilities.
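As a hedged, minimal sketch of partial participation with arbitrary sampling probabilities in general (not the cited paper's adaptive scheme or its convergence analysis; the function and reweighting below are illustrative assumptions), sampled updates can be reweighted by 1/(K n p_i) so the aggregate is unbiased for the full-participation average:

```python
# Minimal sketch of unbiased aggregation under arbitrary client sampling
# probabilities (illustrative only; not the cited paper's adaptive scheme).
import numpy as np

def sample_and_aggregate(client_updates, probs, k, seed=0):
    """client_updates: list of n update vectors; probs: sampling probabilities summing to 1."""
    rng = np.random.default_rng(seed)
    n = len(client_updates)
    chosen = rng.choice(n, size=k, replace=True, p=probs)   # k draws, client i with prob. p_i
    agg = np.zeros_like(client_updates[0])
    for i in chosen:
        agg += client_updates[i] / (k * n * probs[i])        # unbiased for (1/n) * sum_i u_i
    return agg

# Example: 10 clients, sampling biased toward the later ones.
updates = [np.full(3, float(i)) for i in range(10)]
p = np.arange(1, 11, dtype=float); p /= p.sum()
print(sample_and_aggregate(updates, p, k=4))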
arXiv Detail & Related papers (2024-04-22T00:16:18Z)
- Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies [35.29595714883275]
We develop an efficient sketch-based approximation to the Nadaraya-Watson estimator.
Our sampling algorithm outperforms the baseline in terms of wall-clock time and accuracy on four datasets.
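For context, the exact Nadaraya-Watson estimator that such sketch-based proxies approximate is the textbook kernel-weighted average of training labels; the sketch data structure itself is not reproduced here, and the function name and Gaussian kernel choice below are illustrative assumptions.

```python
# Exact Nadaraya-Watson regression with a Gaussian kernel; the cited paper
# replaces this O(n)-per-query computation with a sketch-based approximation.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=1.0):
    d2 = np.sum((x_train - x_query) ** 2, axis=1)        # squared distances to the query
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian kernel weights
    return float(np.dot(w, y_train) / (np.sum(w) + 1e-12))

# Example on toy 1-D data.
x = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * x[:, 0])
print(nadaraya_watson(np.array([0.3]), x, y, bandwidth=0.1))
```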
arXiv Detail & Related papers (2023-11-22T18:40:18Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
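As a loose, hedged sketch of the adaptive-sampling idea only (the cited framework learns the importance with RL-trained networks, which is not reproduced here; the function and allocation rule below are assumptions), a per-frame sample budget can be distributed across pixels in proportion to a predicted importance map:

```python
# Sketch of importance-proportional sample allocation for a renderer
# (illustrative only; the learned importance/denoising networks are not shown).
import numpy as np

def allocate_samples(importance, total_budget, min_spp=1):
    """importance: (H, W) non-negative map -> integer samples-per-pixel map."""
    imp = np.maximum(importance, 0.0)
    p = imp / (imp.sum() + 1e-12)                         # normalize to a distribution
    remaining = max(total_budget - min_spp * imp.size, 0)
    return (min_spp + np.floor(p * remaining)).astype(int)

# Example: a 4x4 frame with a 64-sample budget concentrated on one bright pixel.
imp = np.zeros((4, 4)); imp[0, 0] = 1.0
print(allocate_samples(imp, total_budget=64))
```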
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets)
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
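For reference, a sketch of the underlying notation (standard definitions only; the exact truncation weighting and the ADMM updates follow the cited paper): the Schatten p-norm of a matrix, and a truncated variant that discards the largest singular values so that only the small ones are penalized.

```latex
% Schatten p-norm of X with singular values \sigma_1 \ge \dots \ge \sigma_r,
% and a truncated variant ignoring the t largest singular values
% (the exact truncation and weighting follow the cited paper).
\|X\|_{S_p} = \Bigl(\sum_{i=1}^{r} \sigma_i^{p}\Bigr)^{1/p}, \qquad 0 < p \le 1,
\qquad
\|X\|_{S_p,\,t} = \Bigl(\sum_{i=t+1}^{r} \sigma_i^{p}\Bigr)^{1/p}.
```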
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling [34.187387951367526]
Federated learning (FL) algorithms usually sample a fraction of clients in each round (partial participation) when the number of participants is large.
Recent works have focused on the convergence analysis of FL.
We obtain new convergence bound for FL algorithms with arbitrary client sampling probabilities.
arXiv Detail & Related papers (2021-12-21T14:28:40Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require that all the data is stored in main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
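For context, the exact batch KRR baseline that a streaming method like StreaMRAK avoids materializing is shown below as a minimal sketch with a Gaussian kernel; the streaming, multi-resolution machinery itself is not reproduced, and the function name and kernel choice are assumptions.

```python
# Exact batch kernel ridge regression with a Gaussian kernel. Forming and
# solving against the full n-by-n kernel matrix is precisely what a streaming
# approach avoids; this sketch only shows the baseline problem being solved.
import numpy as np

def krr_fit_predict(x_train, y_train, x_test, bandwidth=1.0, reg=1e-3):
    def gauss(a, b):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * bandwidth**2))
    k_train = gauss(x_train, x_train)                      # n x n kernel matrix
    alpha = np.linalg.solve(k_train + reg * np.eye(len(x_train)), y_train)
    return gauss(x_test, x_train) @ alpha                  # predictions at the test points

# Example on toy 1-D data.
x = np.linspace(0, 1, 100).reshape(-1, 1)
y = np.sin(2 * np.pi * x[:, 0])
print(krr_fit_predict(x, y, np.array([[0.25], [0.75]]), bandwidth=0.1))
```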
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame Optical Flow Estimation [64.99259320624148]
We present a new lightweight CNN-based algorithm for multi-frame optical flow estimation.
The resulting STaRFlow algorithm gives state-of-the-art performances on MPI Sintel and Kitti2015.
arXiv Detail & Related papers (2020-07-10T17:01:34Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs)
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT)
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining its theoretical guarantees.
Our experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.