LCOT: Linear circular optimal transport
- URL: http://arxiv.org/abs/2310.06002v1
- Date: Mon, 9 Oct 2023 14:37:56 GMT
- Title: LCOT: Linear circular optimal transport
- Authors: Rocio Diaz Martin, Ivan Medri, Yikun Bai, Xinran Liu, Kangbai Yan,
Gustavo K. Rohde, Soheil Kolouri
- Abstract summary: We introduce a new computationally efficient metric for circular probability measures, denoted as Linear Circular Optimal Transport (LCOT).
The proposed metric comes with an explicit linear embedding that allows one to apply Machine Learning (ML) algorithms to the embedded measures.
We show that the proposed metric is rooted in the Circular Optimal Transport (COT) and can be considered the linearization of the COT metric with respect to a fixed reference measure.
- Score: 12.500693755673796
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The optimal transport problem for measures supported on non-Euclidean spaces
has recently gained ample interest in diverse applications involving
representation learning. In this paper, we focus on circular probability
measures, i.e., probability measures supported on the unit circle, and
introduce a new computationally efficient metric for these measures, denoted as
Linear Circular Optimal Transport (LCOT). The proposed metric comes with an
explicit linear embedding that allows one to apply Machine Learning (ML)
algorithms to the embedded measures and seamlessly modify the underlying metric
for the ML algorithm to LCOT. We show that the proposed metric is rooted in the
Circular Optimal Transport (COT) and can be considered the linearization of the
COT metric with respect to a fixed reference measure. We provide a theoretical
analysis of the proposed metric and derive the computational complexities for
pairwise comparison of circular probability measures. Lastly, through a set of
numerical experiments, we demonstrate the benefits of LCOT in learning
representations of circular measures.
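To make the linearization concrete, below is a minimal sketch (not the authors' implementation; the exact LCOT embedding and its dependence on the cost exponent are defined in the paper). It computes the circular 1-Wasserstein distance via the classical CDF-shift formula and builds a shift-corrected embedding with respect to the uniform reference measure; the L1 distance between embeddings upper-bounds the circular distance and can be fed directly to standard ML algorithms. All function names are illustrative.

```python
import numpy as np


def circular_ot1(p, q):
    """Circular 1-Wasserstein distance between two histograms supported on n
    equally spaced points of the unit circle, parameterized by [0, 1).

    Uses the classical CDF formula for the circle:
        COT_1(mu, nu) = min over alpha of the integral of |F_mu(t) - F_nu(t) - alpha| dt,
    whose minimizer alpha is a median of F_mu - F_nu.
    """
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    diff = np.cumsum(p) - np.cumsum(q)      # F_mu - F_nu on the grid
    alpha = np.median(diff)                 # optimal circular "cut"
    return np.mean(np.abs(diff - alpha))    # Riemann sum with spacing 1/n


def lcot_like_embedding(p):
    """Hypothetical LCOT-style embedding w.r.t. the uniform reference:
    the shift-corrected deviation of the CDF from the uniform CDF.
    The L1 distance between two embeddings upper-bounds COT_1 and is exact
    against the reference itself (illustrative only; see the paper for the
    actual LCOT embedding)."""
    p = np.asarray(p, float) / np.sum(p)
    n = p.size
    dev = np.cumsum(p) - np.arange(1, n + 1) / n
    return dev - np.median(dev)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 360
    mu = rng.random(n)
    nu = np.roll(mu, 37)                    # a rotated copy of mu
    d_cot = circular_ot1(mu, nu)
    d_lin = np.mean(np.abs(lcot_like_embedding(mu) - lcot_like_embedding(nu)))
    print(f"COT_1 = {d_cot:.4f}, linearized upper bound = {d_lin:.4f}")
```

Choosing the uniform distribution as the reference keeps the embedding closed-form; a non-uniform reference would require composing with its quantile function.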
Related papers
- Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data [11.432994366694862]
Linear optimal transport has been proposed to embed distributions into L^2 spaces, where the L^2 distance approximates the optimal transport distance.
We introduce the Linear Spherical Sliced Optimal Transport framework, which embeds spherical distributions into L^2 spaces while preserving their intrinsic geometry.
We demonstrate its superior computational efficiency in applications such as cortical surface registration, 3D point cloud interpolation via gradient flow, and shape embedding.
arXiv Detail & Related papers (2024-11-09T03:36:59Z)
- A Riemannian Approach to Ground Metric Learning for Optimal Transport [31.333036109340835]
We learn a suitable latent ground metric parameterized by a symmetric positive definite matrix.
Empirical results illustrate the efficacy of the learned metric in OT-based domain adaptation.
arXiv Detail & Related papers (2024-09-16T08:42:56Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Energy-Guided Continuous Entropic Barycenter Estimation for General Costs [95.33926437521046]
We propose a novel algorithm for approximating the continuous Entropic OT (EOT) barycenter for arbitrary OT cost functions.
Our approach is built upon the dual reformulation of the EOT problem based on weak OT.
arXiv Detail & Related papers (2023-10-02T11:24:36Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
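For intuition only, here is a one-dimensional caricature of the embed-then-reduce idea (it is not the LOT Wassmap algorithm, which handles higher-dimensional measures and comes with approximation guarantees): each empirical measure is embedded by its quantile function, whose pairwise L2 distances equal the 2-Wasserstein distances in 1D, and PCA is applied to the embeddings instead of computing all pairwise distances.

```python
import numpy as np

# Embed each empirical measure by its quantile function on a fixed grid of
# levels, then run PCA on the embeddings (no n x n distance matrix needed).
rng = np.random.default_rng(1)
levels = np.linspace(0.01, 0.99, 64)                 # quantile levels

# A family of Gaussian samples whose means sweep a one-dimensional curve.
samples = [rng.normal(loc=m, scale=1.0, size=500) for m in np.linspace(-3, 3, 40)]
emb = np.stack([np.quantile(s, levels) for s in samples])   # LOT-style embedding

emb_c = emb - emb.mean(axis=0)                       # centre, then PCA via SVD
_, svals, vt = np.linalg.svd(emb_c, full_matrices=False)
coords = emb_c @ vt[:2].T                            # 2D coordinates of the measures
print("top singular values:", np.round(svals[:3], 2))
print("first three 2D coordinates:\n", np.round(coords[:3], 2))
```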
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Entropic Neural Optimal Transport via Diffusion Processes [105.34822201378763]
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions.
Our algorithm is based on the saddle point reformulation of the dynamic version of EOT, which is known as the Schrödinger Bridge problem.
In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step.
arXiv Detail & Related papers (2022-11-02T14:35:13Z)
- Riemannian Metric Learning via Optimal Transport [34.557360177483595]
We introduce an optimal transport-based model for learning a metric from cross-sectional samples of evolving probability measures.
We show that metrics learned using our method improve the quality of trajectory inference on scRNA and bird migration data.
arXiv Detail & Related papers (2022-05-18T23:32:20Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- Improving Approximate Optimal Transport Distances using Quantization [23.319746583489763]
Optimal transport is a popular tool in machine learning to compare probability measures geometrically.
Linear programming algorithms for computing OT scale cubically in the size of the input, making OT impractical in the large-sample regime.
We introduce a practical algorithm, which relies on a quantization step, to estimate OT distances between measures given cheap sample access.
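A generic sketch of the quantize-then-solve idea follows, under the assumption of a squared-Euclidean cost and k-means quantization; it is not the paper's estimator and carries none of its error guarantees, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import linprog


def quantized_ot(X, Y, k=32, seed=0):
    """Rough estimate of the squared-cost OT between two point clouds:
    quantize each cloud to k weighted centroids with k-means, then solve
    the small k x k transport LP exactly."""
    cx, lx = kmeans2(X, k, seed=seed, minit="++")
    cy, ly = kmeans2(Y, k, seed=seed, minit="++")
    a = np.bincount(lx, minlength=k) / len(X)              # cluster weights
    b = np.bincount(ly, minlength=k) / len(Y)
    C = ((cx[:, None, :] - cy[None, :, :]) ** 2).sum(-1)   # squared distances

    # Transport LP: minimize <C, T> s.t. row sums = a, column sums = b, T >= 0.
    A_eq = np.zeros((2 * k - 1, k * k))
    for i in range(k):
        A_eq[i, i * k:(i + 1) * k] = 1.0                   # row-sum constraints
    for j in range(k - 1):                                 # one column constraint is redundant
        A_eq[k + j, j::k] = 1.0                            # column-sum constraints
    b_eq = np.concatenate([a, b[:-1]])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun


rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 2))
Y = rng.normal(size=(5000, 2)) + np.array([3.0, 0.0])
print(f"quantized OT estimate: {quantized_ot(X, Y):.3f}")  # close to the squared mean shift, 9.0
```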
arXiv Detail & Related papers (2021-02-25T08:45:06Z)
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
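As a toy illustration of the barycenter notion only (the paper's contribution is a scalable sample-based algorithm beyond one dimension), recall that in 1D the Wasserstein-2 barycenter has a closed form: its quantile function is the weighted average of the input quantile functions. The sketch below uses that fact with sample-based quantiles.

```python
import numpy as np

# 1D Wasserstein-2 barycenter from samples: average the input quantile
# functions with the barycentric weights (closed form in one dimension).
rng = np.random.default_rng(3)
levels = np.linspace(0.005, 0.995, 200)
samples = [rng.normal(-4, 1, 2000), rng.normal(0, 2, 2000), rng.normal(4, 1, 2000)]
weights = np.array([0.25, 0.5, 0.25])

quantiles = np.stack([np.quantile(s, levels) for s in samples])
bary_quantile = weights @ quantiles        # the barycenter's quantile function
print("barycenter mean ~", round(float(bary_quantile.mean()), 2))  # ~ weighted mean of the inputs, 0
```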
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
- Efficient Robust Optimal Transport with Application to Multi-Label Classification [12.521494095948068]
We model the feature-feature relationship via a symmetric positive semi-definite Mahalanobis metric in the OT cost function.
We view the resulting optimization problem as a non-linear OT problem, which we solve using the Frank-Wolfe algorithm.
Empirical results on the discriminative learning setting, such as tag prediction and multi-class classification, illustrate the good performance of our approach.
arXiv Detail & Related papers (2020-10-22T16:43:52Z)
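To illustrate the Mahalanobis ground cost used in the entry above, the sketch below evaluates OT between two uniform, equal-size point clouds under a fixed symmetric PSD metric M; the paper additionally learns M and solves the resulting non-linear problem with Frank-Wolfe, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def mahalanobis_ot_cost(X, Y, M):
    """OT cost between uniform measures on two equal-size point clouds under a
    Mahalanobis ground cost c(x, y) = (x - y)^T M (x - y), with M symmetric PSD.
    For uniform equal-size supports the optimal plan is a permutation, so the
    problem reduces to a linear assignment."""
    diff = X[:, None, :] - Y[None, :, :]              # (n, n, d) pairwise differences
    C = np.einsum("ijd,de,ije->ij", diff, M, diff)    # Mahalanobis ground-cost matrix
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()


rng = np.random.default_rng(4)
X, Y = rng.normal(size=(100, 3)), rng.normal(size=(100, 3)) + 1.0
L = rng.normal(size=(3, 3))
M = L @ L.T                                           # a random symmetric PSD metric
print(f"Mahalanobis OT cost: {mahalanobis_ot_cost(X, Y, M):.3f}")
print(f"Euclidean (M = I) OT cost: {mahalanobis_ot_cost(X, Y, np.eye(3)):.3f}")
```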