On an Edge-Preserving Variational Model for Optical Flow Estimation
- URL: http://arxiv.org/abs/2207.10302v1
- Date: Thu, 21 Jul 2022 04:46:16 GMT
- Title: On an Edge-Preserving Variational Model for Optical Flow Estimation
- Authors: Hirak Doshi, N. Uday Kiran
- Abstract summary: We propose an edge-preserving $L^1$ regularization approach to optical flow estimation.
The proposed method achieves the best average angular and end-point errors compared to some of the state-of-the-art Horn and Schunck based variational methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well known that classical formulations resembling the Horn and Schunck
model are still largely competitive due to modern implementation practices.
In most cases, these models outperform many modern flow estimation methods. In
view of this, we propose an effective implementation design for an
edge-preserving $L^1$ regularization approach to optical flow. The mathematical
well-posedness of our proposed model is studied in the space of functions of
bounded variations $BV(\Omega,\mathbb{R}^2)$. The implementation scheme is
designed in multiple steps. The flow field is computed using the robust
Chambolle-Pock primal-dual algorithm. Motivated by the recent studies of Castro
and Donoho we extend the heuristic of iterated median filtering to our flow
estimation. Further, to refine the flow edges we use the weighted median filter
established by Li and Osher as a post-processing step. Our experiments on the
Middlebury dataset show that the proposed method achieves the best average
angular and end-point errors compared to some of the state-of-the-art Horn and
Schunck based variational methods.
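As a concrete illustration of the computational core, the sketch below runs a Chambolle-Pock primal-dual loop on the closely related ROF (TV-$L^2$ denoising) problem, $\min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\|u-f\|_2^2$, together with a weighted-median helper of the kind used for edge refinement in the Li-Osher post-processing step. This is a minimal sketch under stated assumptions, not the authors' implementation: the full edge-preserving TV-$L^1$ optical-flow energy, its linearization, and the iterated-median heuristic are omitted, and all function names and parameter values here are illustrative.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]
    d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]
    d[1:, :] -= py[:-1, :]
    return d

def chambolle_pock_rof(f, lam=8.0, tau=0.25, sigma=0.25, n_iter=100):
    """Chambolle-Pock primal-dual iterations for TV-L2 (ROF) denoising.
    Step sizes satisfy tau * sigma * ||grad||^2 <= 1 (||grad||^2 <= 8 here)."""
    u = f.copy()
    u_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual ascent, then projection onto the unit ball (the TV term)
        gx, gy = grad(u_bar)
        px += sigma * gx
        py += sigma * gy
        nrm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= nrm
        py /= nrm
        # primal descent via the closed-form prox of the quadratic data term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # over-relaxation step of the primal-dual scheme
        u_bar = 2.0 * u - u_old
    return u

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight --
    the scalar operation behind weighted-median flow refinement."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]
```

In the paper's actual pipeline the primal-dual loop would minimize the vector-valued flow energy over $BV(\Omega,\mathbb{R}^2)$ rather than the scalar denoising energy shown here, and `weighted_median` would be applied per pixel over a neighborhood, with weights derived from the image so that flow edges align with image edges.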
Related papers
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- Flow Map Matching [15.520853806024943]
Flow map matching is an algorithm that learns the two-time flow map of an underlying ordinary differential equation.
We show that flow map matching leads to high-quality samples with significantly reduced sampling cost compared to diffusion or interpolant methods.
arXiv Detail & Related papers (2024-06-11T17:41:26Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Geometry-Aware Normalizing Wasserstein Flows for Optimal Causal Inference [0.0]
This paper presents a groundbreaking approach to causal inference by integrating continuous normalizing flows with parametric submodels.
We leverage optimal transport and Wasserstein gradient flows to develop causal inference methodologies with minimal variance in finite-sample settings.
Preliminary experiments showcase our method's superiority, yielding lower mean-squared errors compared to standard flows.
arXiv Detail & Related papers (2023-11-30T18:59:05Z)
- Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo [104.9535542833054]
We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL).
We instead directly sample the Q function from its posterior distribution, by using Langevin Monte Carlo.
Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite.
arXiv Detail & Related papers (2023-05-29T17:11:28Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such a stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- Bayesian Sequential Optimal Experimental Design for Nonlinear Models Using Policy Gradient Reinforcement Learning [0.0]
We formulate this sequential optimal experimental design (sOED) problem as a finite-horizon partially observable Markov decision process (POMDP).
It is built to accommodate continuous random variables, general non-Gaussian posteriors, and expensive nonlinear forward models.
We solve for the sOED policy numerically via policy gradient (PG) methods from reinforcement learning, and derive and prove the PG expression for sOED.
The overall PG-sOED method is validated on a linear-Gaussian benchmark, and its advantages over batch and greedy designs are demonstrated through a contaminant source inversion problem.
arXiv Detail & Related papers (2021-10-28T17:47:31Z)
- Recent advances in Bayesian optimization with applications to parameter reconstruction in optical nano-metrology [0.0]
Parameter reconstruction is a common problem in optical nano-metrology.
We present a Bayesian Target Vector Optimization scheme which combines two approaches.
We find that the presented method generally uses fewer calls of the model function than any of the competing schemes to achieve similar reconstruction performance.
arXiv Detail & Related papers (2021-07-12T15:32:15Z)
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.