On the Connection between Dynamical Optimal Transport and Functional
Lifting
- URL: http://arxiv.org/abs/2007.02587v1
- Date: Mon, 6 Jul 2020 08:53:35 GMT
- Title: On the Connection between Dynamical Optimal Transport and Functional
Lifting
- Authors: Thomas Vogt, Roland Haase, Danielle Bednarski, Jan Lellmann
- Abstract summary: In this work, we investigate a mathematically rigorous formulation based on embedding into the space of pointwise probability measures over a fixed range $\Gamma$.
By modifying the continuity equation, the approach can be extended to models with higher-order regularization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional lifting methods provide a tool for approximating solutions of
difficult non-convex problems by embedding them into a larger space. In this
work, we investigate a mathematically rigorous formulation based on embedding
into the space of pointwise probability measures over a fixed range $\Gamma$.
Interestingly, this approach can be derived as a generalization of the theory
of dynamical optimal transport. Imposing the established continuity equation as
a constraint corresponds to variational models with first-order regularization.
By modifying the continuity equation, the approach can also be extended to
models with higher-order regularization.
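For context, the continuity-equation constraint the abstract refers to is the one appearing in the Benamou-Brenier formulation of dynamical optimal transport, which in standard notation (not taken verbatim from the paper) reads:

```latex
W_2^2(\mu_0, \mu_1)
  = \min_{(\rho_t, v_t)} \int_0^1 \int_\Omega |v_t(x)|^2 \, d\rho_t(x) \, dt
  \quad \text{s.t.} \quad
  \partial_t \rho_t + \operatorname{div}(\rho_t v_t) = 0,
  \qquad \rho_0 = \mu_0, \quad \rho_1 = \mu_1.
```

Imposing this linear PDE as a constraint is what links first-order variational models to dynamical transport; replacing it with a modified evolution equation is the mechanism behind the higher-order extension mentioned above.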
Related papers
- Efficient Optimization with Orthogonality Constraint: a Randomized Riemannian Submanifold Method [10.239769272138995]
We propose a randomized Riemannian submanifold method for solving optimization problems with orthogonality constraints in machine learning. We introduce two strategies for updating the random submanifold. We show how our approach can be generalized to a wide variety of problems.
arXiv Detail & Related papers (2025-05-18T11:46:44Z) - Proximal optimal transport divergences [6.6875717609310765]
We introduce the proximal optimal transport divergence, a novel discrepancy measure that interpolates between information divergences and optimal transport distances via an infimal convolution formulation. We explore its mathematical properties, including smoothness, boundedness, and computational tractability, and establish connections to primal-dual formulations and adversarial learning. Our framework generalizes existing approaches while offering new insights and computational tools for generative modeling, distributional optimization, and gradient-based learning in probability spaces.
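As a sketch of what an infimal-convolution interpolation between a divergence and a transport cost can look like (the notation here is illustrative and not taken from the paper):

```latex
D_\lambda(\mu, \nu) \;=\; \inf_{\rho}
  \Big\{ D(\rho \,\|\, \nu) \;+\; \tfrac{1}{\lambda}\, \mathrm{OT}_c(\mu, \rho) \Big\}.
```

For small $\lambda$ the transport term forces $\rho \approx \mu$, so a pure information divergence $D(\mu \,\|\, \nu)$ is recovered; varying $\lambda$ trades off the two geometries.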
arXiv Detail & Related papers (2025-05-17T17:48:11Z) - Entropic Mirror Descent for Linear Systems: Polyak's Stepsize and Implicit Bias [55.72269695392027]
This paper focuses on applying entropic mirror descent to solve linear systems. The main challenge for the convergence analysis stems from the unboundedness of the domain. To overcome this without imposing restrictive assumptions, we introduce a variant of Polyak-type stepsizes.
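A minimal sketch of an entropic mirror descent update with a Polyak-type stepsize on a simplex-constrained least-squares problem. This is an illustrative textbook-style construction, not the paper's algorithm; the problem instance and stopping logic are assumptions.

```python
import numpy as np

def entropic_md_polyak(A, b, n_iters=5000):
    """Entropic mirror descent over the probability simplex.

    Minimizes f(x) = 0.5 * ||A x - b||^2, assuming the system is consistent
    on the simplex (min f = 0), so the Polyak stepsize f(x_k) / ||g_k||^2
    needs no tuning.
    """
    n = A.shape[1]
    x = np.full(n, 1.0 / n)            # uniform start inside the simplex
    for _ in range(n_iters):
        r = A @ x - b                  # residual A x - b
        g = A.T @ r                    # gradient of f
        gn = float(g @ g)
        if gn < 1e-30:                 # stationary point reached
            break
        eta = 0.5 * float(r @ r) / gn  # Polyak stepsize with f* = 0
        x = x * np.exp(-eta * g)       # multiplicative (entropic) update
        x /= x.sum()                   # renormalize onto the simplex
    return x

A = np.array([[1.0, 1.0], [1.0, 2.0]])
x_star = np.array([0.4, 0.6])          # planted solution on the simplex
x = entropic_md_polyak(A, A @ x_star)
```

The multiplicative update keeps all iterates strictly positive and normalized, which is exactly what the entropic mirror map buys over a Euclidean projection.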
arXiv Detail & Related papers (2025-05-05T12:33:18Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the constrained convex Markov decision process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP poses several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - A Computational Framework for Solving Wasserstein Lagrangian Flows [48.87656245464521]
In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging.
We propose a novel deep-learning-based framework that approaches all of these problems from a unified perspective.
We showcase the versatility of the proposed framework by outperforming previous approaches on single-cell trajectory inference.
arXiv Detail & Related papers (2023-10-16T17:59:54Z) - Optimal control of distributed ensembles with application to Bloch
equations [0.0]
We study an optimal ensemble control problem in a probabilistic setting with a general nonlinear performance criterion.
We derive an exact representation of the increment of the cost functional in terms of the flow of the driving vector field.
The numerical method is applied to solve new control problems for distributed ensembles of Bloch equations.
arXiv Detail & Related papers (2023-03-15T22:54:51Z) - Variational Monte Carlo Approach to Partial Differential Equations with
Neural Networks [0.0]
We develop a variational approach for solving partial differential equations governing the evolution of high dimensional probability distributions.
Our approach naturally works on the unbounded continuous domain and encodes the full probability density function through its variational parameters.
For the considered benchmark cases we observe excellent agreement with numerical solutions as well as analytical solutions in regimes inaccessible to traditional computational approaches.
arXiv Detail & Related papers (2022-06-04T07:36:35Z) - A Variational Inference Approach to Inverse Problems with Gamma
Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms are not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
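The policy-evaluation connection can be illustrated with a minimal TD(0) iteration using linear (here one-hot) features on a two-state deterministic chain. This is a standard textbook setup, not code or analysis from the paper:

```python
import numpy as np

# Two-state deterministic chain 0 -> 1 -> 0, reward 1 for leaving state 0.
gamma = 0.9
phi = np.eye(2)        # one-hot features: linear approximation = tabular case
w = np.zeros(2)        # weights; the value estimate of state s is phi[s] @ w
alpha = 0.1            # constant stepsize is fine here: dynamics are noiseless
s = 0
for _ in range(20000):
    s_next = 1 - s
    reward = 1.0 if s == 0 else 0.0
    # TD error: bootstrapped target minus current estimate
    td_error = reward + gamma * (phi[s_next] @ w) - phi[s] @ w
    w = w + alpha * td_error * phi[s]
    s = s_next

# Bellman fixed point: v0 = 1 + gamma * v1 and v1 = gamma * v0,
# so v0 = 1 / (1 - gamma^2).
v0_true = 1.0 / (1.0 - gamma ** 2)
```

With one-hot features the projected fixed-point equation is solved exactly, so the weights converge to the true values; with a lower-dimensional feature map, the same iteration converges to the projection of the value function onto the feature subspace, which is the error the oracle inequalities above characterize.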
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
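A minimal sketch of the deterministic conditional gradient (Frank-Wolfe) iteration these methods build on, here minimizing a quadratic over the probability simplex; the variance reduction and constraint sampling of the paper are omitted, and the problem instance is an assumption:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=20000):
    """Frank-Wolfe over the probability simplex.

    The linear minimization oracle over the simplex is a coordinate argmin
    of the gradient, so every iterate is a convex combination of vertices
    and no projection step is ever needed.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # vertex minimizing <g, s>
        gamma = 2.0 / (k + 2.0)      # classical open-loop stepsize
        x = (1.0 - gamma) * x + gamma * s
    return x

c = np.array([0.2, 0.3, 0.5])        # interior point of the simplex
x = frank_wolfe_simplex(lambda x: x - c,  # gradient of 0.5 * ||x - c||^2
                        np.full(3, 1.0 / 3.0))
```

Since the minimizer of 0.5 * ||x - c||^2 over the simplex is c itself when c lies inside it, the iterates approach c at the classical O(1/k) rate.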
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Stochastic spectral embedding [0.0]
We propose a novel sequential adaptive surrogate modeling method based on "stochastic spectral embedding" (SSE).
We show how the method compares favorably against state-of-the-art sparse chaos expansions on a set of models with different complexity and input dimension.
arXiv Detail & Related papers (2020-04-09T11:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.