Phase transition of the Sinkhorn-Knopp algorithm
- URL: http://arxiv.org/abs/2507.09711v1
- Date: Sun, 13 Jul 2025 17:07:51 GMT
- Title: Phase transition of the Sinkhorn-Knopp algorithm
- Authors: Kun He
- Abstract summary: We show that the Sinkhorn-Knopp algorithm produces a nearly doubly stochastic matrix in $O(\log n - \log \varepsilon)$ iterations and $\widetilde{O}(n^2)$ time for matrices whose normalized version has density $\gamma > 1/2$. For every $\gamma < 1/2$, there exists a matrix with density $\gamma$ for which the algorithm requires $\Omega\left(n^{1/2}/\varepsilon\right)$ iterations.
- Score: 8.271753306351684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The matrix scaling problem, particularly the Sinkhorn-Knopp algorithm, has been studied for over 60 years. In practice, the algorithm often yields high-quality approximations within just a few iterations. Theoretically, however, the best-known upper bound places it in the class of pseudopolynomial-time approximation algorithms. Meanwhile, the lower-bound landscape remains largely unexplored. Two fundamental questions persist: what accounts for the algorithm's strong empirical performance, and can a tight bound on its iteration count be established? For an $n\times n$ matrix, its normalized version is obtained by dividing each entry by its largest entry. We say that a normalized matrix has a density $\gamma$ if there exists a constant $\rho > 0$ such that one row or column has exactly $\lceil \gamma n \rceil$ entries with values at least $\rho$, and every other row and column has at least $\lceil \gamma n \rceil$ such entries. For the upper bound, we show that the Sinkhorn-Knopp algorithm produces a nearly doubly stochastic matrix in $O(\log n - \log \varepsilon)$ iterations and $\widetilde{O}(n^2)$ time for all nonnegative square matrices whose normalized version has a density $\gamma > 1/2$. Such matrices cover both the algorithm's principal practical inputs and its typical theoretical regime, and the $\widetilde{O}(n^2)$ runtime is optimal. For the lower bound, we establish a tight bound of $\widetilde{\Omega}\left(n^{1/2}/\varepsilon\right)$ iterations for positive matrices under the $\ell_2$-norm error measure. Moreover, for every $\gamma < 1/2$, there exists a matrix with density $\gamma$ for which the algorithm requires $\Omega\left(n^{1/2}/\varepsilon\right)$ iterations. In summary, our results reveal a sharp phase transition in the Sinkhorn-Knopp algorithm at the density threshold $\gamma = 1/2$.
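The iteration itself is simple to state: alternately normalize the rows and the columns until the matrix is nearly doubly stochastic. Below is a minimal NumPy sketch of that iteration together with the paper's density measure; the function names, the stopping rule, and the use of the $\ell_2$ error measure from the lower bound are illustrative choices, not code from the paper.

```python
import numpy as np

def sinkhorn_knopp(A, eps=1e-6, max_iters=100_000):
    """Alternate row and column normalization of a nonnegative matrix A.

    Assumes every row and column has a positive sum. Stops once the l2
    distance of the row- and column-sum vectors from the all-ones vector
    falls below eps, i.e., the matrix is nearly doubly stochastic under
    the error measure used for the paper's lower bound.
    """
    S = np.asarray(A, dtype=float).copy()
    for it in range(1, max_iters + 1):
        S /= S.sum(axis=1, keepdims=True)   # normalize rows
        S /= S.sum(axis=0, keepdims=True)   # normalize columns
        err = np.sqrt(np.sum((S.sum(axis=1) - 1.0) ** 2) +
                      np.sum((S.sum(axis=0) - 1.0) ** 2))
        if err < eps:
            return S, it
    return S, max_iters

def density(A, rho):
    """Density of the normalized matrix N = A / max(A) at threshold rho.

    Per the paper's definition, every row and column of N must have at
    least ceil(gamma * n) entries >= rho, with one line attaining that
    count exactly, so gamma is the smallest per-line count divided by n.
    """
    N = np.asarray(A, dtype=float)
    N = N / N.max()
    n = N.shape[0]
    counts = np.concatenate([(N >= rho).sum(axis=1),
                             (N >= rho).sum(axis=0)])
    return counts.min() / n
```

In the regime the paper identifies, a matrix whose normalized version has density $\gamma > 1/2$ makes this loop terminate within $O(\log n - \log \varepsilon)$ iterations, while for suitable matrices with $\gamma < 1/2$ the same loop can need $\Omega(n^{1/2}/\varepsilon)$ iterations.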
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm has an $\widetilde{O}(n^2)$ runtime when $k=O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Krylov Methods are (nearly) Optimal for Low-Rank Approximation [8.017116107657206]
We show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods.
Our lower bound addresses Open Question 1 of [WooWoo14], providing evidence for the lack of progress on algorithms for Spectral LRA.
arXiv Detail & Related papers (2023-04-06T16:15:19Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\widetilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Improved quantum lower and upper bounds for matrix scaling [0.0]
We investigate the possibilities for quantum speedups for classical second-order algorithms.
We show that there can be essentially no quantum speedup in terms of the input size in the high-precision regime.
We give improved quantum algorithms in the low-precision regime.
arXiv Detail & Related papers (2021-09-30T17:29:23Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d \times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z) - Quantum algorithms for matrix scaling and matrix balancing [9.010461408997646]
Matrix scaling and matrix balancing are two basic linear-algebraic problems with a wide variety of applications.
We study the power and limitations of quantum algorithms for these problems.
We provide quantum implementations of two classical methods: Sinkhorn's algorithm for matrix scaling and Osborne's algorithm for matrix balancing; a classical sketch of the latter appears after this list.
arXiv Detail & Related papers (2020-11-25T15:26:59Z) - Average Case Column Subset Selection for Entrywise $\ell_1$-Norm Loss [76.02734481158458]
It is known that in the worst case, to obtain a good rank-$k$ approximation to a matrix, one needs an arbitrarily large $n^{\Omega(1)}$ number of columns.
We show that under certain minimal and realistic distributional settings, it is possible to obtain a $(k/\epsilon)$-approximation with a nearly linear running time and $\mathrm{poly}(k/\epsilon)+O(k\log n)$ columns.
This is the first algorithm of any kind for achieving a $(k/\epsilon)$-approximation for entrywise $\ell_1$-norm loss.
arXiv Detail & Related papers (2020-04-16T22:57:06Z)
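For context on the classical method named in the matrix-balancing entry above: Osborne's algorithm seeks a diagonal $D$ such that corresponding rows and columns of $D A D^{-1}$ have equal norms, rebalancing one index at a time. A minimal classical NumPy sketch follows; the choice of the 1-norm, the cyclic sweep order, the stopping rule, and the name `osborne_balance` are illustrative assumptions, not details taken from the quantum paper.

```python
import numpy as np

def osborne_balance(A, eps=1e-6, max_sweeps=1000):
    """Cyclic Osborne iteration: returns (B, d) with B = diag(d) A diag(d)^-1
    whose i-th row and column 1-norms (diagonal excluded) are nearly equal."""
    B = np.asarray(A, dtype=float).copy()
    n = B.shape[0]
    d = np.ones(n)                           # accumulated diagonal scaling
    for _ in range(max_sweeps):
        worst = 0.0
        for i in range(n):
            off = np.arange(n) != i
            r = np.abs(B[i, off]).sum()      # row i norm, diagonal excluded
            c = np.abs(B[off, i]).sum()      # column i norm, diagonal excluded
            if r == 0.0 or c == 0.0:
                continue                     # index i cannot be balanced
            alpha = np.sqrt(c / r)           # equalizes the two norms
            B[i, :] *= alpha                 # scale row i ...
            B[:, i] /= alpha                 # ... and undo it on column i
            d[i] *= alpha
            worst = max(worst, abs(np.log(r / c)))
        if worst < eps:                      # every index nearly balanced
            return B, d
    return B, d
```

After each update, the rebalanced row and column both have norm $\sqrt{rc}$; the diagonal entry is untouched since it is multiplied and then divided by the same factor.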
This list is automatically generated from the titles and abstracts of the papers on this site.