Unbalanced Optimal Transport through Non-negative Penalized Linear
Regression
- URL: http://arxiv.org/abs/2106.04145v1
- Date: Tue, 8 Jun 2021 07:16:37 GMT
- Title: Unbalanced Optimal Transport through Non-negative Penalized Linear
Regression
- Authors: Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte and
Gilles Gasso
- Abstract summary: We show that the corresponding optimization problem can be reformulated as a non-negative penalized linear regression problem.
We propose novel algorithms inspired by inverse problems and nonnegative matrix factorization.
We derive for the first time an efficient algorithm to compute the regularization path of UOT with quadratic penalties.
- Score: 9.668391961887027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of Unbalanced Optimal Transport (UOT) in
which the marginal conditions are relaxed (using weighted penalties in lieu of
equality) and no additional regularization is enforced on the OT plan. In this
context, we show that the corresponding optimization problem can be
reformulated as a non-negative penalized linear regression problem. This
reformulation allows us to propose novel algorithms inspired by inverse
problems and nonnegative matrix factorization. In particular, we consider
majorization-minimization, which leads in our setting to efficient
multiplicative updates for a variety of penalties. Furthermore, we derive for
the first time an efficient algorithm to compute the regularization path of UOT
with quadratic penalties. The proposed algorithm provides a continuum of
piecewise-linear OT plans converging to the solution of balanced OT
(corresponding to infinite penalty weights). We perform several numerical
experiments on simulated and real data illustrating the new algorithms, and
provide a detailed discussion about more sophisticated optimization tools that
can further be used to solve OT problems thanks to our reformulation.
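The reformulation described in the abstract can be illustrated with a minimal sketch. Assuming quadratic marginal penalties, UOT becomes a non-negative penalized least-squares problem, and a majorization-minimization argument yields Lee-Seung-style multiplicative updates. The function below is a hypothetical illustration of this idea, not the authors' reference implementation (which the POT library also provides in a more complete form):

```python
import numpy as np

def uot_quadratic_mm(a, b, C, lam=1.0, n_iter=5000, eps=1e-12):
    """Illustrative sketch: UOT with quadratic marginal penalties solved as a
    non-negative penalized linear regression via multiplicative MM updates.

    Objective: min_{T >= 0}  <C, T> + (lam/2) * (||T 1 - a||^2 + ||T^T 1 - b||^2)
    """
    n, m = C.shape
    T = np.ones((n, m)) / (n * m)  # strictly positive initialization
    for _ in range(n_iter):
        # The gradient splits into a nonnegative positive part and a
        # nonnegative negative part; the MM update divides one by the other.
        row_sums = T.sum(axis=1, keepdims=True)   # T 1
        col_sums = T.sum(axis=0, keepdims=True)   # T^T 1
        numer = lam * (a[:, None] + b[None, :])
        denom = lam * (row_sums + col_sums) + C + eps
        T *= numer / denom
    return T
```

As the penalty weight `lam` grows, the marginals of the returned plan approach `a` and `b`, consistent with the convergence to balanced OT mentioned above.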
Related papers
- A Penalty-Based Guardrail Algorithm for Non-Decreasing Optimization with Inequality Constraints [1.5498250598583487]
Traditional mathematical programming solvers require long computational times to solve constrained minimization problems.
We propose a penalty-based guardrail algorithm (PGA) to efficiently solve them.
arXiv Detail & Related papers (2024-05-03T10:37:34Z)
- Experimental demonstration of improved quantum optimization with linear Ising penalties [0.562479170374811]
We explore an alternative penalty method that only involves linear Ising terms and apply it to a customer data science problem.
Our findings support our hypothesis that the linear Ising penalty method should improve the performance of quantum optimization.
For problems with many constraints, where making all penalties linear is unlikely to be feasible, we investigate strategies for combining linear Ising penalties with quadratic penalties.
arXiv Detail & Related papers (2024-04-08T12:54:19Z)
- Optimal Transport with Cyclic Symmetry [14.140178595218625]
We propose novel fast algorithms for optimal transport (OT) utilizing a cyclic symmetry structure of input data.
This paper successfully introduces the concept of symmetry into the OT research field for the first time.
arXiv Detail & Related papers (2023-11-22T04:18:23Z)
- Unbalanced Optimal Transport meets Sliced-Wasserstein [11.44982599214965]
We propose two new loss functions based on the idea of slicing unbalanced OT, and study their induced topology and statistical properties.
We show that the resulting methodology is modular as it encompasses and extends prior related work.
arXiv Detail & Related papers (2023-06-12T15:15:00Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Offline Policy Optimization in RL with Variance Regularization [142.87345258222942]
We propose variance regularization for offline RL algorithms, using stationary distribution corrections.
We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer.
The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms.
arXiv Detail & Related papers (2022-12-29T18:25:01Z)
- Proximal Point Imitation Learning [48.50107891696562]
We develop new algorithms with rigorous efficiency guarantees for infinite horizon imitation learning.
We leverage classical tools from optimization, in particular, the proximal-point method (PPM) and dual smoothing.
We achieve convincing empirical performance for both linear and neural network function approximation.
arXiv Detail & Related papers (2022-09-22T12:40:21Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that attenuating step-size is required for exact convergence with the fact that constant step-size learns faster in time up to an error.
Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
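The sketch-and-solve idea summarized above can be shown in a few lines. The snippet below is a hypothetical illustration using a plain Gaussian embedding with a fixed sketch size; it does not reproduce the paper's contribution (SRHT embeddings and sketch sizes adapted to the effective dimension):

```python
import numpy as np

def sketched_ridge(A, b, nu, m, seed=0):
    """Sketch-and-solve for L2-regularized least squares: compress the data
    with a Gaussian embedding S, then solve the smaller ridge problem
    min_x ||S A x - S b||^2 + nu ||x||^2 exactly. Illustrative only."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian embedding
    SA, Sb = S @ A, S @ b
    # normal equations of the sketched ridge problem
    return np.linalg.solve(SA.T @ SA + nu * np.eye(d), SA.T @ Sb)
```

The sketch size `m` trades accuracy for speed; the paper's point is that tying `m` to the effective dimension of the regularized problem can make it much smaller than the number of rows of `A`.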
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Learning Cost Functions for Optimal Transport [44.64193016158591]
Inverse optimal transport (OT) refers to the problem of learning the cost function for OT from observed transport plan or its samples.
We derive an unconstrained convex optimization formulation of the inverse OT problem, which can be further augmented by any customizable regularization.
arXiv Detail & Related papers (2020-02-22T07:27:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.