Convergence rate of Tsallis entropic regularized optimal transport
- URL: http://arxiv.org/abs/2304.06616v1
- Date: Thu, 13 Apr 2023 15:37:14 GMT
- Title: Convergence rate of Tsallis entropic regularized optimal transport
- Authors: Takeshi Suguro and Toshiaki Yachimura
- Abstract summary: We consider Tsallis entropic regularized optimal transport and discuss the convergence rate as the regularization parameter $\varepsilon$ goes to $0$.
We compare this to the convergence rate of the entropic regularized optimal transport with the Kullback--Leibler (KL) divergence and show that the KL divergence gives the fastest convergence rate among the Tsallis relative entropies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider Tsallis entropic regularized optimal transport and
discuss the convergence rate as the regularization parameter $\varepsilon$ goes
to $0$. In particular, we establish the convergence rate of the Tsallis
entropic regularized optimal transport using the quantization and shadow
arguments developed by Eckstein--Nutz. We compare this to the convergence rate
of the entropic regularized optimal transport with Kullback--Leibler (KL)
divergence and show that the KL divergence gives the fastest convergence rate among
the Tsallis relative entropies.
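For context, the display below gives one common convention for the Tsallis relative entropy of order $q$ and for the corresponding regularized problem; this is a sketch of standard definitions, and the normalization used in the paper may differ.

$$
D_q(\pi \,\|\, \mu \otimes \nu) = \frac{1}{q-1}\left(\int \left(\frac{d\pi}{d(\mu \otimes \nu)}\right)^{q-1} d\pi - 1\right), \qquad q > 0,\ q \neq 1,
$$

$$
\mathrm{OT}_{\varepsilon, q}(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \int c \, d\pi + \varepsilon \, D_q(\pi \,\|\, \mu \otimes \nu).
$$

As $q \to 1$, $D_q$ converges to the Kullback--Leibler divergence, so the classical entropic (Sinkhorn) regularization is recovered; the paper quantifies how fast $\mathrm{OT}_{\varepsilon, q}$ approaches the unregularized optimal transport cost as $\varepsilon \to 0$ and compares that rate across choices of divergence. A small numerical sketch of this vanishing-regularization limit (in the KL case) is given after the related-papers list below.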
Related papers
- Controlling the Flow: Stability and Convergence for Stochastic Gradient Descent with Decaying Regularization [0.40964539027092917]
We prove strong convergence of reg-SGD to the minimum-norm solution of the original problem without additional boundedness assumptions.
Our analysis reveals how vanishing Tikhonov regularization controls the flow of SGD and yields stable learning dynamics.
arXiv Detail & Related papers (2025-05-16T16:53:49Z)
- Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation [51.248784084461334]
We prove new convergence rates for a generalized version of Nesterov acceleration under interpolation conditions.
Our analysis reduces the dependence on the strong growth constant from $\rho$ to $\sqrt{\rho}$ as compared to prior work.
arXiv Detail & Related papers (2024-04-03T00:41:19Z) - Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation [12.455342327482223]
We present a generalised formulation of convergent regularisation in terms of critical points.
We show that this is achieved by a class of weakly convex regularisers.
Applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks.
arXiv Detail & Related papers (2024-02-01T22:54:45Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Convergence Rates for Regularized Optimal Transport via Quantization [3.04585143845864]
We study the convergence of divergence-regularized optimal transport as the regularization parameter vanishes.
We develop a novel methodology using quantization and martingale couplings that is suitable for non-compact marginals.
arXiv Detail & Related papers (2022-08-30T16:58:40Z)
- Nearly Tight Convergence Bounds for Semi-discrete Entropic Optimal Transport [0.483420384410068]
We derive nearly tight and non-asymptotic convergence bounds for solutions of entropic semi-discrete optimal transport.
Our results also entail a non-asymptotic and tight expansion of the difference between the entropic and the unregularized costs.
arXiv Detail & Related papers (2021-10-25T06:52:45Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, together with variations of the method that yield favorable convergence.
We prove that, when augmented with iteration averaging, SEG converges to the Nash equilibrium, and that this rate is further accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization [120.31448970413298]
We extend the theory of Tikhonov regularization to generalized self-concordant (GSC) loss functions.
We show that fast and optimal rates can be achieved for GSC losses by using an iterated Tikhonov regularization scheme.
arXiv Detail & Related papers (2021-06-16T15:25:41Z) - The Convergence Indicator: Improved and completely characterized
parameter bounds for actual convergence of Particle Swarm Optimization [68.8204255655161]
We introduce a new convergence indicator that can be used to calculate whether the particles will finally converge to a single point or diverge.
Using this convergence indicator we provide the actual bounds completely characterizing parameter regions that lead to a converging swarm.
arXiv Detail & Related papers (2020-06-06T19:08:05Z)
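As mentioned after the abstract above, the vanishing-regularization limit studied in this paper (and in the quantization-based entry in this list) can be observed numerically in the classical KL case. The following is a minimal, illustrative sketch only: it assumes a tiny discrete problem with a squared-distance cost, the function names and parameter choices are ours rather than from any of the papers, and the Tsallis case ($q \neq 1$) would require a different solver.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import logsumexp

def sinkhorn_cost(mu, nu, C, eps, n_iter=20_000):
    """Transport cost of the KL-regularized optimal coupling, computed with
    log-domain Sinkhorn iterations for numerical stability at small eps."""
    log_mu, log_nu = np.log(mu), np.log(nu)
    f, g = np.zeros(len(mu)), np.zeros(len(nu))   # dual potentials
    for _ in range(n_iter):
        f = -eps * logsumexp((g[None, :] - C) / eps + log_nu[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_mu[:, None], axis=0)
    # Optimal regularized coupling: pi_ij = mu_i * nu_j * exp((f_i + g_j - C_ij) / eps)
    log_pi = log_mu[:, None] + log_nu[None, :] + (f[:, None] + g[None, :] - C) / eps
    return float(np.sum(np.exp(log_pi) * C))

def exact_ot_cost(mu, nu, C):
    """Unregularized optimal transport cost via linear programming."""
    n, m = len(mu), len(nu)
    A_eq = np.vstack([np.kron(np.eye(n), np.ones((1, m))),   # row sums equal mu
                      np.kron(np.ones((1, n)), np.eye(m))])  # column sums equal nu
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun

# Tiny example: uniform marginals on two 1-D point clouds, squared-distance cost.
rng = np.random.default_rng(0)
x, y = rng.normal(size=6), rng.normal(size=7)
C = (x[:, None] - y[None, :]) ** 2
mu, nu = np.full(6, 1 / 6), np.full(7, 1 / 7)

ot_star = exact_ot_cost(mu, nu, C)
for eps in [0.5, 0.2, 0.1, 0.05]:
    gap = sinkhorn_cost(mu, nu, C, eps) - ot_star
    print(f"eps = {eps:4.2f}   regularized transport cost - OT cost = {gap:.6f}")
```

The printed gap shrinks as eps decreases, which is the qualitative behaviour whose precise rate (for KL and for general Tsallis divergences) the papers listed here analyze.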
This list is automatically generated from the titles and abstracts of the papers on this site.