Embedding Empirical Distributions for Computing Optimal Transport Maps
- URL: http://arxiv.org/abs/2504.17740v1
- Date: Thu, 24 Apr 2025 16:52:48 GMT
- Title: Embedding Empirical Distributions for Computing Optimal Transport Maps
- Authors: Mingchen Jiang, Peng Xu, Xichen Ye, Xiaohui Chen, Yun Yang, Yifan Chen
- Abstract summary: We introduce a novel approach to learning transport maps for new empirical distributions. We employ the transformer architecture to produce embeddings from distributional data of varying length. These embeddings are then fed into a hypernetwork to generate neural OT maps.
- Score: 20.78001177211786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributional data have become increasingly prominent in modern signal processing, highlighting the necessity of computing optimal transport (OT) maps across multiple probability distributions. Nevertheless, recent studies on neural OT methods have predominantly focused on the efficient computation of a single map between two distributions. To address this gap, we introduce a novel approach to learning transport maps for new empirical distributions. Specifically, we employ the transformer architecture to produce embeddings from distributional data of varying length; these embeddings are then fed into a hypernetwork to generate neural OT maps. Various numerical experiments were conducted to validate the embeddings and the generated OT maps. The model implementation and the code are available at https://github.com/jiangmingchen/HOTET.
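To make the pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a transformer encoder embeds an empirical distribution (a variable-length point cloud), and a hypernetwork maps that embedding to the weights of a small MLP acting as the transport map. All module names, sizes, and the way the source/target pair is conditioned on are illustrative assumptions, not the authors' HOTET implementation.

```python
# Sketch only: a set-style transformer produces a fixed-size embedding of an
# empirical distribution; a hypernetwork turns that embedding into the
# weights of a one-hidden-layer map T. Names and sizes are assumptions.
import torch
import torch.nn as nn

class DistributionEncoder(nn.Module):
    """Embeds a variable-length point cloud into a fixed-size vector."""
    def __init__(self, dim=2, width=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(dim, width)
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):            # x: (batch, n_points, dim)
        h = self.encoder(self.embed(x))
        return h.mean(dim=1)         # mean-pool over points -> (batch, width)

class HyperOTMap(nn.Module):
    """Hypernetwork emitting the weights of a small MLP map T."""
    def __init__(self, dim=2, width=64, hidden=32):
        super().__init__()
        self.dim, self.hidden = dim, hidden
        n_params = hidden * dim + hidden + dim * hidden + dim
        self.hyper = nn.Linear(width, n_params)

    def forward(self, emb, x):       # emb: (batch, width), x: (batch, n, dim)
        p = self.hyper(emb)
        d, h, i = self.dim, self.hidden, 0
        W1 = p[:, i:i + h * d].view(-1, h, d); i += h * d
        b1 = p[:, i:i + h].view(-1, 1, h);     i += h
        W2 = p[:, i:i + d * h].view(-1, d, h); i += d * h
        b2 = p[:, i:i + d].view(-1, 1, d)
        z = torch.relu(x @ W1.transpose(1, 2) + b1)
        return z @ W2.transpose(1, 2) + b2    # T(x): (batch, n, dim)

# Usage: condition the generated map on embeddings of both distributions.
enc, hyper = DistributionEncoder(), HyperOTMap()
src = torch.randn(1, 128, 2)         # empirical source sample
tgt = torch.randn(1, 200, 2) + 3.0   # empirical target sample (other length)
emb = enc(src) + enc(tgt)            # one simple way to fuse the pair
mapped = hyper(emb, src)             # pushforward of the source points
```

In practice the generated map would be trained with a neural OT objective (e.g., a dual or semi-dual loss) over many source/target pairs, so that a single trained model can produce maps for unseen empirical distributions.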
Related papers
- Overcoming Fake Solutions in Semi-Dual Neural Optimal Transport: A Smoothing Approach for Learning the Optimal Transport Plan [5.374547520354591]
Semi-dual Neural OT, a widely used approach for learning OT maps with neural networks, often generates fake solutions that fail to transfer one distribution to another accurately.
We propose a novel method, OTP, which learns both the OT map and the optimal transport plan, representing the optimal coupling between two distributions.
Our experiments show that the OTP model recovers the optimal transport map where existing methods fail and outperforms current OT-based models in image-to-image translation tasks.
arXiv Detail & Related papers (2025-02-07T00:37:12Z)
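For context, the semi-dual objective such methods optimize is the following standard formulation (textbook notation, not necessarily this paper's), where c is the transport cost and v a Kantorovich potential:

```latex
\[
W_c(\mu, \nu) \;=\; \sup_{v} \left[ \int_{\mathcal{X}} v^{c}(x)\, d\mu(x)
  \;+\; \int_{\mathcal{Y}} v(y)\, d\nu(y) \right],
\qquad
v^{c}(x) \;=\; \inf_{y} \big[\, c(x, y) - v(y) \,\big].
\]
```

The map is then read off as T(x) = argmin_y [c(x, y) - v(y)]; roughly, "fake solutions" are potentials that score well on this objective while the induced T fails to actually push the source distribution onto the target.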
- Improving Neural Optimal Transport via Displacement Interpolation [16.474572112062535]
Optimal Transport (OT) theory investigates the cost-minimizing transport map that moves a source distribution to a target distribution.
We propose a novel method, DIOTM, to improve stability and achieve a better approximation of the OT map by exploiting displacement interpolation.
We demonstrate that DIOTM outperforms existing OT-based models on image-to-image translation tasks.
arXiv Detail & Related papers (2024-10-03T16:42:23Z)
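The displacement interpolation being exploited is McCann's interpolation: given the OT map T from the source mu_0 to the target mu_1, the intermediate distributions are pushforwards along the straight lines joining x to T(x) (standard definition, not this paper's notation):

```latex
\[
\mu_t \;=\; \big( (1 - t)\,\mathrm{Id} + t\,T \big)_{\#}\, \mu_0,
\qquad t \in [0, 1],
\]
```

which recovers mu_0 and mu_1 at the endpoints and traces the Wasserstein geodesic in between.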
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
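For reference, the consensus ADMM template that such distributed schemes build on splits a global objective \(\sum_i f_i(x)\) across N workers via a consensus variable z (the textbook updates; the paper's sampling variant adds stochastic terms that this sketch omits):

```latex
\[
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i)
             + \tfrac{\rho}{2}\,\| x_i - z^{k} + u_i^{k} \|_2^2, \\
z^{k+1}   &= \frac{1}{N} \sum_{i=1}^{N} \big( x_i^{k+1} + u_i^{k} \big), \\
u_i^{k+1} &= u_i^{k} + x_i^{k+1} - z^{k+1}.
\end{aligned}
\]
```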
- Analyzing and Improving Optimal-Transport-based Adversarial Networks [9.980822222343921]
The Optimal Transport (OT) problem aims to find a transport plan that bridges two distributions while minimizing a given cost function.
OT theory has been widely utilized in generative modeling.
Our approach achieves an FID score of 2.51 on CIFAR-10 and 5.99 on CelebA-HQ-256, outperforming unified OT-based adversarial approaches.
arXiv Detail & Related papers (2023-10-04T06:52:03Z)
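The transport plan referred to above is the solution of the Kantorovich problem (standard statement):

```latex
\[
\mathrm{OT}_c(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)}
  \int_{\mathcal{X} \times \mathcal{Y}} c(x, y)\, d\pi(x, y),
\]
```

where Pi(mu, nu) is the set of couplings with marginals mu and nu; OT-based adversarial networks typically parameterize a dual of this problem with neural networks.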
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Scalable Computation of Monge Maps with General Costs [12.273462158073302]
The Monge map refers to the optimal transport map between two probability distributions.
We present a scalable algorithm for computing the Monge map between two probability distributions.
arXiv Detail & Related papers (2021-06-07T17:23:24Z)
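Unlike the Kantorovich plan above, the Monge problem asks for a deterministic map (standard formulation):

```latex
\[
\inf_{T \,:\, T_{\#}\mu \,=\, \nu} \;
  \int_{\mathcal{X}} c\big(x, T(x)\big)\, d\mu(x),
\]
```

where T_# mu = nu means T pushes mu forward onto nu; handling general (non-quadratic) costs at scale is the difficulty this paper targets.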
- WILDS: A Benchmark of in-the-Wild Distribution Shifts [157.53410583509924]
Distribution shifts can substantially degrade the accuracy of machine learning systems deployed in the wild.
We present WILDS, a curated collection of 8 benchmark datasets that reflect a diverse range of distribution shifts.
We show that standard training results in substantially lower out-of-distribution performance than in-distribution performance.
arXiv Detail & Related papers (2020-12-14T11:14:56Z)
- PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization [64.39761523935613]
We present a new framework for Patch Distribution Modeling, PaDiM, to concurrently detect and localize anomalies in images.
PaDiM makes use of a pretrained convolutional neural network (CNN) for patch embedding.
It also exploits correlations between the different semantic levels of the CNN to better localize anomalies.
arXiv Detail & Related papers (2020-11-17T17:29:18Z)
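As a rough illustration of the patch-distribution idea (a sketch in the spirit of PaDiM, not the authors' code; layer choices, shapes, and the regularization eps are assumptions), each spatial position gets a Gaussian fitted over normal training images, and test patches are scored by Mahalanobis distance:

```python
import numpy as np

def fit_patch_gaussians(embeddings, eps=0.01):
    """embeddings: (n_images, n_positions, d) patch features taken from a
    pretrained CNN. Fits a Gaussian per spatial position."""
    n, p, d = embeddings.shape
    means = embeddings.mean(axis=0)                       # (p, d)
    inv_covs = np.empty((p, d, d))
    for j in range(p):
        cov = np.cov(embeddings[:, j, :], rowvar=False)   # (d, d)
        inv_covs[j] = np.linalg.inv(cov + eps * np.eye(d))
    return means, inv_covs

def anomaly_scores(test_emb, means, inv_covs):
    """test_emb: (n_positions, d). Mahalanobis distance per position;
    reshaped to the feature grid, this gives the localization map."""
    diff = test_emb - means                               # (p, d)
    return np.sqrt(np.einsum("pd,pde,pe->p", diff, inv_covs, diff))
```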
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
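A minimal sketch of the propagation step, assuming an RBF similarity graph and a label-propagation-style propagator (the paper's exact kernel and normalization may differ):

```python
import numpy as np

def propagate_embeddings(z, alpha=0.5, sigma=1.0):
    """Smooth embeddings z (n, d) over a similarity graph built from the
    embeddings themselves; kernel, sigma, and alpha are assumptions."""
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    a = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(a, 0.0)                              # no self-loops
    deg = a.sum(axis=1)
    a_norm = a / np.sqrt(np.outer(deg, deg))              # symmetric norm.
    propagator = np.linalg.inv(np.eye(len(z)) - alpha * a_norm)
    return propagator @ z                                 # smoothed embeddings
```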
- AE-OT-GAN: Training GANs from data specific latent distribution [21.48007565143911]
Generative adversarial networks (GANs) are prominent models for generating realistic and crisp images.
However, GANs often encounter mode collapse and are hard to train, which stems from approximating an intrinsically discontinuous distribution transform map with continuous DNNs.
The recently proposed AE-OT model addresses this problem by explicitly computing the discontinuous distribution transform map.
In this paper, we propose the AE-OT-GAN model to combine the advantages of both models: generating high-quality images while overcoming the mode collapse/mixture problems.
arXiv Detail & Related papers (2020-01-11T01:18:00Z)