Deep Equilibrium Optical Flow Estimation
- URL: http://arxiv.org/abs/2204.08442v1
- Date: Mon, 18 Apr 2022 17:53:44 GMT
- Title: Deep Equilibrium Optical Flow Estimation
- Authors: Shaojie Bai, Zhengyang Geng, Yash Savani, J. Zico Kolter
- Abstract summary: Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
- Score: 80.80992684796566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many recent state-of-the-art (SOTA) optical flow models use finite-step
recurrent update operations to emulate traditional algorithms by encouraging
iterative refinements toward a stable flow estimation. However, these RNNs
impose large computation and memory overheads, and are not directly trained to
model such stable estimation. They can converge poorly and thereby suffer from
performance degradation. To combat these drawbacks, we propose deep equilibrium
(DEQ) flow estimators, an approach that directly solves for the flow as the
infinite-level fixed point of an implicit layer (using any black-box solver),
and differentiates through this fixed point analytically (thus requiring $O(1)$
training memory). This implicit-depth approach is not predicated on any
specific model, and thus can be applied to a wide range of SOTA flow estimation
model designs. The use of these DEQ flow estimators allows us to compute the
flow faster using, e.g., fixed-point reuse and inexact gradients, consumes
$4\sim6\times$ less training memory than the recurrent counterpart, and
achieves better results with the same computation budget. In addition, we
propose a novel, sparse fixed-point correction scheme to stabilize our DEQ flow
estimators, which addresses a longstanding challenge for DEQ models in general.
We test our approach in various realistic settings and show that it improves
SOTA methods on Sintel and KITTI datasets with substantially better
computational and memory efficiency.
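For intuition, here is a minimal sketch of the DEQ recipe the abstract describes, assuming a PyTorch-style API; `deq_solve`, `update_fn`, and the toy update are hypothetical stand-ins, not the authors' code. The solver runs gradient-free (so training memory stays $O(1)$ in solver depth), and gradients are re-attached with a single differentiable step at the equilibrium, one common form of the inexact gradients mentioned above.

```python
import torch

def deq_solve(update_fn, z0, max_iter=40, tol=1e-4):
    """Solve z* = update_fn(z*) with a black-box fixed-point loop.

    The loop runs under no_grad, so no graph is stored per iteration;
    any solver (e.g. Anderson acceleration) could be substituted here.
    """
    z = z0
    with torch.no_grad():
        for _ in range(max_iter):
            z_next = update_fn(z)
            if (z_next - z).norm() / (z_next.norm() + 1e-8) < tol:
                z = z_next
                break
            z = z_next
    # Inexact gradient: re-attach autograd with ONE differentiable step
    # at the equilibrium instead of backpropagating through the solver.
    return update_fn(z)

# Toy usage with a contractive update (illustrative, not a flow model):
W = (0.05 * torch.randn(16, 16)).requires_grad_(True)
update = lambda z: torch.tanh(z @ W)

z_star = deq_solve(update, torch.zeros(8, 16))
z_star.sum().backward()        # gradient flows through the last step only
print(W.grad.norm().item())
```

Fixed-point reuse, also mentioned in the abstract, would amount to seeding `z0` with the equilibrium computed for the previous video frame rather than with zeros.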
Related papers
- Adaptive operator learning for infinite-dimensional Bayesian inverse problems [7.716833952167609]
We develop an adaptive operator learning framework that can reduce modeling error gradually by forcing the surrogate to be accurate in local areas.
We present a rigorous convergence guarantee in the linear case using the UKI framework.
The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
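A rough sketch of the adaptive loop this summary describes, with every name a hypothetical placeholder (`inversion_step` stands for one update of an ensemble-based inverter such as the paper's UKI, and `true_forward` for the expensive exact operator); this is the general shape of the idea, not the paper's algorithm.

```python
import torch

def adaptive_inversion(surrogate, true_forward, inversion_step, ensemble,
                       y_obs, rounds=10, fit_steps=50, lr=1e-3):
    """Alternate between inverting with a cheap surrogate and refitting
    the surrogate in the local region the inversion currently explores."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(rounds):
        # 1) One inversion update using the current surrogate.
        ensemble = inversion_step(surrogate, ensemble, y_obs).detach()
        # 2) Force the surrogate to be accurate locally: fit it to the
        #    true operator evaluated on the current ensemble.
        target = true_forward(ensemble).detach()
        for _ in range(fit_steps):
            opt.zero_grad()
            loss = (surrogate(ensemble) - target).pow(2).mean()
            loss.backward()
            opt.step()
    return ensemble
```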
arXiv Detail & Related papers (2023-10-27T01:50:33Z)
- Training Energy-Based Normalizing Flow with Score-Matching Objectives [36.0810550035231]
We present a new flow-based modeling approach called energy-based normalizing flow (EBFlow)
We demonstrate that by optimizing EBFlow with score-matching objectives, the computation of Jacobian determinants for linear transformations can be entirely bypassed.
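The summary is terse, so here is a generic Hutchinson-style score-matching objective of the kind such models can be trained with; note that no Jacobian determinant appears anywhere, which is the point made above. The network and names are illustrative, not the paper's code.

```python
import torch

def score_matching_loss(score_fn, x):
    """Score matching: J = E[ 0.5 * ||s(x)||^2 + tr(ds/dx) ],
    with the trace estimated by one Gaussian probe v via v^T (ds/dx) v.
    No Jacobian determinant is ever formed."""
    x = x.clone().requires_grad_(True)
    s = score_fn(x)                                   # (B, D) model score
    v = torch.randn_like(s)
    (vJ,) = torch.autograd.grad(s, x, grad_outputs=v, create_graph=True)
    trace_est = (vJ * v).sum(dim=1)                   # ~ tr(ds/dx)
    return (0.5 * s.pow(2).sum(dim=1) + trace_est).mean()

# Toy usage with an illustrative score network:
net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 4))
loss = score_matching_loss(net, torch.randn(32, 4))
loss.backward()
```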
arXiv Detail & Related papers (2023-05-24T15:54:29Z)
- On an Edge-Preserving Variational Model for Optical Flow Estimation [0.0]
We propose an edge-preserving $L^1$ regularization approach to optical flow estimation.
The proposed method achieves the lowest average angular and end-point errors among several state-of-the-art Horn and Schunck based variational methods.
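The summary does not reproduce the functional, but a plausible shape for such an edge-preserving $L^1$ energy (our notation, not necessarily the paper's exact model) combines a Horn-Schunck data term with a weighted total-variation penalty that relaxes on image edges:

```latex
% Flow u = (u_1, u_2); the weight g decays on strong image gradients,
% so flow discontinuities are penalized less where |\nabla I| is large.
\begin{equation}
  E(\mathbf{u}) \;=\; \int_\Omega \bigl(\nabla I \cdot \mathbf{u} + I_t\bigr)^2 \, dx
  \;+\; \lambda \int_\Omega g\bigl(\lvert\nabla I\rvert\bigr)
        \bigl(\lvert\nabla u_1\rvert + \lvert\nabla u_2\rvert\bigr)\, dx,
  \qquad g(s) = e^{-\alpha s}.
\end{equation}
```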
arXiv Detail & Related papers (2022-07-21T04:46:16Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
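A sketch of the correlation-and-softmax matching step in the spirit of GMFlow (not the official code): each pixel of frame 1 is compared against all pixels of frame 2, and flow is read off as the softmax-weighted expected match position minus the source position.

```python
import torch
import torch.nn.functional as F

def global_matching_flow(feat1, feat2):
    """Global matching: all-pairs correlation -> softmax -> expected flow.
    feat1, feat2: (B, C, H, W) feature maps of the two frames."""
    B, C, H, W = feat1.shape
    f1 = feat1.flatten(2).transpose(1, 2)              # (B, HW, C)
    f2 = feat2.flatten(2).transpose(1, 2)              # (B, HW, C)
    corr = f1 @ f2.transpose(1, 2) / C ** 0.5          # (B, HW, HW)
    prob = F.softmax(corr, dim=-1)                     # matching distribution
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], -1).reshape(1, H * W, 2).float().to(feat1)
    matched = prob @ grid.expand(B, -1, -1)            # expected match coords
    return (matched - grid).transpose(1, 2).reshape(B, 2, H, W)

# Toy usage:
flow = global_matching_flow(torch.randn(2, 32, 24, 32),
                            torch.randn(2, 32, 24, 32))
print(flow.shape)   # torch.Size([2, 2, 24, 32])
```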
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
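The gist of such a regularizer can be sketched with a Hutchinson estimate of the Jacobian's Frobenius norm at the equilibrium; the names and toy update below are ours, and this is only the shape of the idea, not the paper's exact recipe.

```python
import torch

def jac_frobenius_reg(f_out, z):
    """Single-probe Hutchinson estimate of ||d f_out / d z||_F^2.

    E_v ||J^T v||^2 = ||J||_F^2 for v ~ N(0, I); adding gamma * this
    term to the training loss penalizes the Jacobian of the fixed-point
    update and encourages stable forward/backward convergence."""
    v = torch.randn_like(f_out)
    (JTv,) = torch.autograd.grad(f_out, z, grad_outputs=v,
                                 create_graph=True, retain_graph=True)
    return JTv.pow(2).sum()

# Toy usage with a hypothetical update function at the equilibrium:
W = (0.1 * torch.randn(16, 16)).requires_grad_(True)
z = torch.randn(8, 16, requires_grad=True)
f_out = torch.tanh(z @ W)
reg = jac_frobenius_reg(f_out, z)
reg.backward()   # would be added to the task loss in practice
```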
arXiv Detail & Related papers (2021-06-28T00:14:11Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
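To see why a non-Euclidean (here, $\ell_\infty$) contraction condition gives well-posedness, consider the sketch below: if $\|W\|_\infty < 1$ and the activation is 1-Lipschitz, the update $z \mapsto \sigma(Wz + Ux + b)$ is a contraction in the $\ell_\infty$ norm, so a unique fixed point exists by Banach's theorem. This illustrates the general idea only; the paper derives sharper conditions.

```python
import torch

def infnorm_project(W, margin=0.95):
    """Rescale rows of W so the induced infinity norm (max absolute
    row sum) is at most margin < 1, making the implicit update a
    contraction in the l-infinity norm."""
    row_sums = W.abs().sum(dim=1, keepdim=True)
    scale = (margin / row_sums).clamp(max=1.0)   # never inflate rows
    return W * scale

W = infnorm_project(torch.randn(64, 64))
assert W.abs().sum(dim=1).max() <= 0.95 + 1e-6
```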
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
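A toy numerical illustration of the idea for a single linear flow layer (initialization and names are ours, not the paper's): the exact gradient of $\log|\det W|$ is $W^{-\top}$, which costs $\mathcal{O}(D^3)$ to obtain, while a learned approximate inverse $R \approx W^{-1}$ provides the $\mathcal{O}(D^2)$ surrogate $R^\top$.

```python
import torch

D = 16
W = torch.eye(D) + 0.05 * torch.randn(D, D)    # weight of one linear flow layer
R = torch.linalg.inv(W)                         # "learned" approximate inverse

# Exact gradient of log|det W| w.r.t. W is inverse(W).T  -- O(D^3).
# Self-normalizing surrogate: reuse the learned inverse  -- O(D^2).
grad_exact = torch.linalg.inv(W).t()
grad_surrogate = R.t()
print((grad_exact - grad_surrogate).abs().max().item())  # ~0 by construction here

# During training, R is kept close to W^{-1} with a cheap reconstruction
# penalty on the data itself, e.g.:
x = torch.randn(128, D)
recon_loss = ((x @ W.t()) @ R.t() - x).square().mean()
```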
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of extrapolation variants can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
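One member of the extrapolation family the paper unifies is the extragradient-style step sketched below (our illustration, not the paper's exact scheme): take a lookahead step, then update from the gradient evaluated at the extrapolated point.

```python
import torch

def extragradient_step(params, loss_fn, lr):
    """One extrapolation step: p_half = p - lr * grad f(p),
    then p <- p - lr * grad f(p_half)."""
    grads = torch.autograd.grad(loss_fn(params), params)
    half = [(p - lr * g).detach().requires_grad_(True)
            for p, g in zip(params, grads)]
    grads_half = torch.autograd.grad(loss_fn(half), half)
    return [(p - lr * g).detach().requires_grad_(True)
            for p, g in zip(params, grads_half)]

# Toy usage on a quadratic:
w = [torch.randn(5, requires_grad=True)]
f = lambda ps: (ps[0] ** 2).sum()
for _ in range(100):
    w = extragradient_step(w, f, lr=0.1)
print(f(w).item())   # decreases toward 0
```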
arXiv Detail & Related papers (2020-06-10T08:22:41Z)