Additive estimates of the permanent using Gaussian fields
- URL: http://arxiv.org/abs/2212.10672v2
- Date: Tue, 13 Feb 2024 19:28:13 GMT
- Title: Additive estimates of the permanent using Gaussian fields
- Authors: Tantrik Mukerji and Wei-Shih Yang
- Abstract summary: We present a randomized algorithm for estimating the permanent of an $M \times M$ real matrix $A$ up to an additive error.
We can estimate $\mathrm{perm}(A)$ to an additive error of $\epsilon\bigg(\sqrt{3^{2M}\prod^{2M}_{i=1} C_{ii}}\bigg)$ in polynomial time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a randomized algorithm for estimating the permanent of an $M \times M$ real matrix $A$ up to an additive error. We do this by viewing the permanent $\mathrm{perm}(A)$ of $A$ as the expectation of a product of centered joint Gaussian random variables with a particular covariance matrix $C$. The algorithm outputs the empirical mean $S_{N}$ of this product after sampling $N$ times. Our algorithm runs in total time $O(M^{3} + M^{2}N + MN)$ with failure probability
\begin{equation*}
P(|S_{N}-\mathrm{perm}(A)| > t) \leq \frac{3^{M}}{t^{2}N} \prod^{2M}_{i=1} C_{ii}.
\end{equation*}
In particular, we can estimate $\mathrm{perm}(A)$ to an additive error of $\epsilon\bigg(\sqrt{3^{2M}\prod^{2M}_{i=1} C_{ii}}\bigg)$ in polynomial time. We compare to a previous procedure due to Gurvits. We discuss how to find a particular $C$ using a semidefinite program and a relation to the Max-Cut problem and cut-norms.
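The sampling scheme the abstract describes is simple enough to sketch. The Python snippet below builds a $2M \times 2M$ covariance $C$ whose off-diagonal blocks are $A$ and $A^{\top}$, draws $N$ centered Gaussian vectors with covariance $C$, and averages the product of all $2M$ coordinates; by Isserlis' theorem, when the within-block covariances are diagonal, the only surviving pairings match each of the first $M$ coordinates to one of the last $M$, and their sum is exactly $\mathrm{perm}(A)$. The diagonal blocks $cI$ with $c = \|A\|_2$ are one simple feasible choice that makes $C$ positive semidefinite; the paper instead tunes $C$ via a semidefinite program, so treat this as an illustrative sketch rather than the authors' exact construction.

```python
import numpy as np

def permanent_estimate(A, N, rng=None):
    """Monte Carlo additive estimate of perm(A) for a real M x M matrix A.

    Uses the identity perm(A) = E[X_1 ... X_M Y_1 ... Y_M] for a centered
    Gaussian vector (X, Y) with diagonal within-block covariances and
    cross-block covariance A (Isserlis' theorem).
    """
    rng = np.random.default_rng() if rng is None else rng
    M = A.shape[0]
    # Feasible diagonal blocks: c*I with c = ||A||_2 makes
    # C = [[cI, A], [A^T, cI]] positive semidefinite (Schur complement).
    c = np.linalg.norm(A, 2)
    C = np.block([[c * np.eye(M), A], [A.T, c * np.eye(M)]])
    # Factor once in O(M^3); each of the N samples then costs O(M^2).
    L = np.linalg.cholesky(C + 1e-10 * np.eye(2 * M))  # jitter: C may be singular
    Z = rng.standard_normal((N, 2 * M)) @ L.T          # N draws from N(0, C)
    return np.prod(Z, axis=1).mean()                   # empirical mean S_N

# Quick check on a 2x2 example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_estimate(A, 200_000))  # ~10 up to Monte Carlo error
```

As for choosing $C$, the abstract says only that a semidefinite program exists, not which one. One plausible convex surrogate, stated here as an assumption, is to minimize the trace of the diagonal, which by the AM-GM inequality controls the product $\prod^{2M}_{i=1} C_{ii}$ appearing in the failure probability:
\begin{equation*}
\min_{d \in \mathbb{R}^{2M}} \; \sum^{2M}_{i=1} d_i
\quad \text{s.t.} \quad
\begin{pmatrix} \mathrm{diag}(d_{1},\ldots,d_{M}) & A \\ A^{\top} & \mathrm{diag}(d_{M+1},\ldots,d_{2M}) \end{pmatrix} \succeq 0.
\end{equation*}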
Related papers
- Sample-Efficient Linear Regression with Self-Selection Bias [7.605563562103568]
We consider the problem of linear regression with self-selection bias in the unknown-index setting.
We provide a novel and near-optimally sample-efficient (in terms of $k$) algorithm to recover $\mathbf{w}_1,\ldots,\mathbf{w}_k\in$ …
Our algorithm succeeds under significantly relaxed noise assumptions, and therefore also succeeds in the related setting of max-linear regression.
arXiv Detail & Related papers (2024-02-22T02:20:24Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Matrix Completion in Almost-Verification Time [37.61139884826181]
We provide an algorithm which completes $\mathbf{M}$ on $99\%$ of rows and columns.
We show how to boost this partial completion guarantee to a full matrix completion algorithm.
arXiv Detail & Related papers (2023-08-07T15:24:49Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Classical shadows of fermions with particle number symmetry [0.0]
We provide an estimator for any $k$-RDM with $\mathcal{O}(k^{2}\eta)$ classical complexity.
Our method, in the worst case of half-filling, still provides a factor of $4^{k}$ advantage in sample complexity.
arXiv Detail & Related papers (2022-08-18T17:11:12Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Computational Complexity of Normalizing Constants for the Product of
Determinantal Point Processes [12.640283469603357]
We study the computational complexity of computing the normalizing constant.
We show that computing $\sum_{S}\det(\mathbf{A}_{S,S})^{p}$ exactly for every (fixed) positive even integer $p$ is UP-hard and Mod$_3$P-hard.
arXiv Detail & Related papers (2021-11-28T14:08:25Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^{T} - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_{*}/\sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ …
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Locally Private $k$-Means Clustering with Constant Multiplicative
Approximation and Near-Optimal Additive Error [10.632986841188]
We bridge the gap between the exponents of $n$ in the upper and lower bounds on the additive error with two new algorithms.
It is possible to solve the locally private $k$-means problem in a constant number of rounds with constant factor multiplicative approximation.
arXiv Detail & Related papers (2021-05-31T14:41:40Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^{\top}A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/$ …
arXiv Detail & Related papers (2020-11-21T01:33:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.