Convergence rate of Tsallis entropic regularized optimal transport
- URL: http://arxiv.org/abs/2304.06616v2
- Date: Mon, 07 Oct 2024 18:06:53 GMT
- Title: Convergence rate of Tsallis entropic regularized optimal transport
- Authors: Takeshi Suguro, Toshiaki Yachimura
- Abstract summary: We establish fundamental results such as the $\Gamma$-convergence of the Tsallis regularized optimal transport to the Monge--Kantorovich problem as the regularization parameter tends to zero.
We show that the KL regularization achieves the fastest convergence rate in the Tsallis framework.
- Score: 0.0
- Abstract: In this paper, we study the Tsallis entropic regularized optimal transport in the continuous setting and establish fundamental results such as the $\Gamma$-convergence of the Tsallis regularized optimal transport to the Monge--Kantorovich problem as the regularization parameter tends to zero. In addition, using the quantization and shadow arguments developed by Eckstein--Nutz, we derive the convergence rate of the Tsallis entropic regularization and provide explicit constants. Furthermore, we compare these results with the well-known case of the Kullback--Leibler (KL) divergence regularization and show that the KL regularization achieves the fastest convergence rate in the Tsallis framework.
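For quick reference, below is a minimal sketch (not taken from the paper) of how a Tsallis entropic regularized transport problem is commonly written; the cost $c$, marginals $\mu,\nu$, regularization parameter $\varepsilon$, and Tsallis index $q$ are standard notation assumed here, and the paper's exact normalization of the divergence may differ.

```latex
% Sketch only: one common convention for Tsallis-regularized optimal transport;
% the paper's exact normalization of the divergence may differ.
\[
  \mathrm{OT}_{\varepsilon,q}(\mu,\nu)
    = \inf_{\pi \in \Pi(\mu,\nu)} \int c \, d\pi
      + \varepsilon \, D_q\!\left(\pi \,\middle\|\, \mu \otimes \nu\right),
  \qquad
  D_q(\pi \,\|\, \rho)
    = \frac{1}{q-1}\left( \int \left(\frac{d\pi}{d\rho}\right)^{q} d\rho - 1 \right),
\]
% where \Pi(\mu,\nu) is the set of couplings of \mu and \nu.  As q -> 1, D_q reduces
% to the Kullback--Leibler divergence (the Sinkhorn-type case), and as \varepsilon -> 0
% the regularized value approaches the Monge--Kantorovich cost, which is the
% Gamma-convergence statement summarized above.
```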
Related papers
- Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation [51.248784084461334]
We prove new convergence rates for a generalized version of Nesterov acceleration under interpolation conditions.
Our analysis reduces the dependence on the strong growth constant from $\rho$ to $\sqrt{\rho}$ as compared to prior work.
arXiv Detail & Related papers (2024-04-03T00:41:19Z) - Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation [12.455342327482223]
We present a generalised formulation of convergent regularisation in terms of critical points.
We show that this is achieved by a class of weakly convex regularisers.
Applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks.
arXiv Detail & Related papers (2024-02-01T22:54:45Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Convergence Rates for Regularized Optimal Transport via Quantization [3.04585143845864]
We study the convergence of divergence-regularized optimal transport as the regularization parameter vanishes.
We develop a novel methodology based on quantization and martingale couplings that is suitable for non-compact marginals.
arXiv Detail & Related papers (2022-08-30T16:58:40Z) - Nearly Tight Convergence Bounds for Semi-discrete Entropic Optimal Transport [0.483420384410068]
We derive nearly tight and non-asymptotic convergence bounds for solutions of entropic semi-discrete optimal transport.
Our results also entail a non-asymptotic and tight expansion of the difference between the entropic and the unregularized costs.
arXiv Detail & Related papers (2021-10-25T06:52:45Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization [120.31448970413298]
We extend the theory of Tikhonov regularization to generalized self-concordant (GSC) loss functions.
We show that fast and optimal rates can be achieved for GSC by using the iterated Tikhonov regularization scheme.
arXiv Detail & Related papers (2021-06-16T15:25:41Z) - The Convergence Indicator: Improved and completely characterized parameter bounds for actual convergence of Particle Swarm Optimization [68.8204255655161]
We introduce a new convergence indicator that can be used to determine whether the particles will finally converge to a single point or diverge.
Using this convergence indicator we provide the actual bounds completely characterizing parameter regions that lead to a converging swarm.
arXiv Detail & Related papers (2020-06-06T19:08:05Z)