Matrix majorization in large samples
- URL: http://arxiv.org/abs/2301.07353v2
- Date: Mon, 8 Jan 2024 08:55:33 GMT
- Title: Matrix majorization in large samples
- Authors: Muhammad Usman Farooq and Tobias Fritz and Erkka Haapasalo and Marco Tomamichel
- Abstract summary: We show that if certain monotones are strictly ordered between the two tuples, then there exists a stochastic matrix taking the $n$-fold Kronecker power of each input distribution to the $n$-fold Kronecker power of the corresponding output distribution.
We find conditions that are both necessary and sufficient for such catalytic matrix majorization.
- Score: 9.421843976231372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One tuple of probability vectors is more informative than another tuple when
there exists a single stochastic matrix transforming the probability vectors of
the first tuple into the probability vectors of the other. This is called
matrix majorization. Solving an open problem raised by Mu et al., we show that
if certain monotones - namely multivariate extensions of R\'{e}nyi divergences
- are strictly ordered between the two tuples, then for sufficiently large $n$,
there exists a stochastic matrix taking the $n$-fold Kronecker power of each
input distribution to the $n$-fold Kronecker power of the corresponding output
distribution. The same conditions, with non-strict ordering for the monotones,
are also necessary for such matrix majorization in large samples.
Our result also gives conditions for the existence of a sequence of
statistical maps that asymptotically (with vanishing error) convert a single
copy of each input distribution to the corresponding output distribution with
the help of a catalyst that is returned unchanged. Allowing for transformation
with arbitrarily small error, we find conditions that are both necessary and
sufficient for such catalytic matrix majorization.
We derive our results by building on a general algebraic theory of preordered
semirings recently developed by one of the authors. This also allows us to
recover various existing results on majorization in large samples and in the
catalytic regime as well as relative majorization in a unified manner.
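The single-copy condition in the abstract, a single stochastic matrix transforming every input distribution into its corresponding output, can be phrased as a linear-programming feasibility problem. The sketch below is our own illustration (the helper name `matrix_majorizes` and the example vectors are hypothetical, not from the paper); it checks feasibility with SciPy and uses `np.kron` to form the Kronecker powers the paper studies.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_majorizes(ps, qs):
    """Check whether one column-stochastic matrix T maps each input
    distribution p_i (length n) to the output q_i (length m), i.e.
    T @ p_i = q_i for all i, via an LP feasibility test."""
    n, m = len(ps[0]), len(qs[0])
    A_eq, b_eq = [], []
    # Equality constraints T @ p = q, one block per pair (p, q).
    for p, q in zip(ps, qs):
        for i in range(m):
            row = np.zeros(m * n)          # vec(T), row-major
            row[i * n:(i + 1) * n] = p
            A_eq.append(row)
            b_eq.append(q[i])
    # Each column of T must sum to 1 (stochasticity).
    for j in range(n):
        col = np.zeros(m * n)
        col[j::n] = 1.0
        A_eq.append(col)
        b_eq.append(1.0)
    res = linprog(c=np.zeros(m * n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (m * n), method="highs")
    return res.success

p1, p2 = np.array([0.8, 0.2]), np.array([0.5, 0.5])
q1, q2 = np.array([0.7, 0.3]), np.array([0.5, 0.5])
print(matrix_majorizes([p1, p2], [q1, q2]))                # single copy
print(matrix_majorizes([np.kron(p1, p1), np.kron(p2, p2)],
                       [np.kron(q1, q1), np.kron(q2, q2)]))  # two copies
```

The paper's large-sample question asks for which tuples the second kind of check eventually succeeds for large enough Kronecker powers even when the single-copy check fails.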
Related papers
- Matrix majorization in large samples with varying support restrictions [7.988085110283119]
We study matrix majorization in large samples and in the catalytic regime.
We find an application in the theory of catalytic state transformation in quantum thermodynamics.
arXiv Detail & Related papers (2024-07-23T15:37:17Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
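As a toy illustration of the object studied above (the synthetic data and parameter choices are ours, not the paper's), the rank-$k$ approximation from the truncated eigendecomposition of a kernel matrix and its entrywise error can be computed as follows:

```python
import numpy as np

# Small RBF kernel matrix on synthetic 1-D points (illustrative choice).
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

# Rank-k approximation: keep the k largest eigenvalues and eigenvectors.
k = 10
vals, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
vals_k, vecs_k = vals[-k:], vecs[:, -k:]
K_k = vecs_k @ np.diag(vals_k) @ vecs_k.T

# Maximum entrywise error of the rank-k approximation.
err = np.max(np.abs(K - K_k))
print(f"max entrywise error: {err:.2e}")
```

The paper's contribution is bounds on this entrywise quantity, which can be much smaller than what the spectral-norm error alone suggests.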
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex
Gaussian Perturbations [28.431572772564518]
We show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\tilde{O}(\sqrt{kd})$.
This improves on previous work that requires that the gap between every pair of top-$k$ eigenvalues of $M$ is at least $\sqrt{d}$ for a similar bound.
arXiv Detail & Related papers (2023-06-29T03:18:53Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Near optimal sample complexity for matrix and tensor normal models via
geodesic convexity [5.191641077435773]
We show nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics.
In the same regimes as our sample complexity bounds, we show that an iterative procedure to compute the MLE known as the flip-flop algorithm converges linearly with high probability.
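The flip-flop algorithm mentioned above alternates closed-form updates of the row and column covariances of a matrix normal model. Below is a minimal sketch with synthetic data (our own illustrative implementation, not the paper's code; note that $A \otimes B$ is only identifiable up to an overall scale):

```python
import numpy as np

def flip_flop(samples, iters=100, tol=1e-10):
    """Alternating MLE updates for X ~ MN(0, A, B), where A (p x p) is
    the row covariance and B (q x q) the column covariance."""
    n, p, q = samples.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        Binv = np.linalg.inv(B)
        A_new = sum(X @ Binv @ X.T for X in samples) / (n * q)
        Ainv = np.linalg.inv(A_new)
        B_new = sum(X.T @ Ainv @ X for X in samples) / (n * p)
        delta = np.linalg.norm(A_new - A) + np.linalg.norm(B_new - B)
        A, B = A_new, B_new
        if delta < tol:      # stop once the iterates have stabilised
            break
    return A, B

# Synthetic matrix normal data: X_i = L_A Z_i L_B^T with Z_i standard normal.
rng = np.random.default_rng(1)
A0 = np.diag([2.0, 1.0])
B0 = np.diag([1.0, 1.0, 0.5])
LA, LB = np.linalg.cholesky(A0), np.linalg.cholesky(B0)
X = np.einsum("ij,njk,lk->nil", LA, rng.standard_normal((2000, 2, 3)), LB)
A_hat, B_hat = flip_flop(X)
```

With enough samples, the Kronecker product of the estimates recovers $A_0 \otimes B_0$ up to the scale ambiguity; the paper quantifies how few samples suffice.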
arXiv Detail & Related papers (2021-10-14T17:47:00Z) - Convergence Rates of Stochastic Gradient Descent under Infinite Noise
Variance [14.06947898164194]
Heavy tails emerge in stochastic gradient descent (SGD) in various scenarios.
We provide convergence guarantees for SGD under a state-dependent and heavy-tailed noise with a potentially infinite variance.
Our results indicate that even under heavy-tailed noise with infinite variance, SGD can converge to the global optimum.
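A toy illustration of the setting above (the objective, noise model, and step sizes are our own choices, not the paper's): SGD on a one-dimensional quadratic with Student-t gradient noise of 1.5 degrees of freedom, a distribution with infinite variance, combined with Polyak-Ruppert averaging.

```python
import numpy as np

# Minimise f(x) = x^2 / 2; the gradient oracle returns x plus heavy-tailed
# noise whose variance is infinite (Student-t, 1.5 degrees of freedom).
rng = np.random.default_rng(0)
x = 5.0
iterates = []
for t in range(20000):
    noise = 0.5 * rng.standard_t(df=1.5)
    grad = x + noise                      # noisy gradient of f at x
    x -= (0.5 / (t + 1)) * grad           # decaying step size c / (t + 1)
    iterates.append(x)

# Polyak-Ruppert averaging over the second half of the run.
x_avg = np.mean(iterates[10000:])
print(f"averaged iterate: {x_avg:.3f}")
```

Despite occasional very large noise draws, the decaying step sizes keep the iterates stable, in line with the convergence guarantees the paper establishes.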
arXiv Detail & Related papers (2021-02-20T13:45:11Z) - Leveraged Matrix Completion with Noise [84.20092979053119]
We show that we can provably recover an unknown $n \times n$ matrix of rank $r$ from just about $\mathcal{O}(nr\log^2(n))$ entries.
Our proofs are supported by a novel approach that phrases sufficient optimality conditions based on the Golfing Scheme.
arXiv Detail & Related papers (2020-11-11T16:25:45Z) - Generic aspects of the resource theory of quantum coherence [0.0]
We prove that if two $n$-dimensional pure states are chosen independently according to the natural uniform distribution, then the probability that they are comparable vanishes as $n \rightarrow \infty$.
We also study the maximal success probability of incoherent conversions and find an explicit formula for its large-$n$ distribution.
arXiv Detail & Related papers (2020-10-13T16:38:52Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.