Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
- URL: http://arxiv.org/abs/2211.06418v1
- Date: Fri, 11 Nov 2022 18:54:01 GMT
- Title: Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
- Authors: Oren Mangoubi and Nisheeth K. Vishnoi
- Abstract summary: Given a symmetric matrix $M$ and a vector $\lambda$, we present new bounds on the Frobenius-distance utility of the Gaussian mechanism for approximating $M$ by a matrix whose spectrum is $\lambda$.
Our bounds depend on both $lambda$ and the gaps in the eigenvalues of $M$, and hold whenever the top $k+1$ eigenvalues of $M$ have sufficiently large gaps.
- Score: 28.431572772564518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a symmetric matrix $M$ and a vector $\lambda$, we present new bounds on
the Frobenius-distance utility of the Gaussian mechanism for approximating $M$
by a matrix whose spectrum is $\lambda$, under
$(\varepsilon,\delta)$-differential privacy. Our bounds depend on both
$\lambda$ and the gaps in the eigenvalues of $M$, and hold whenever the top
$k+1$ eigenvalues of $M$ have sufficiently large gaps. When applied to the
problems of private rank-$k$ covariance matrix approximation and subspace
recovery, our bounds yield improvements over previous bounds. Our bounds are
obtained by viewing the addition of Gaussian noise as a continuous-time matrix
Brownian motion. This viewpoint allows us to track the evolution of eigenvalues
and eigenvectors of the matrix, which are governed by stochastic differential
equations discovered by Dyson. These equations allow us to bound the utility as
the square-root of a sum-of-squares of perturbations to the eigenvectors, as
opposed to a sum of perturbation bounds obtained via Davis-Kahan-type theorems.
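To make the setting concrete, here is a minimal sketch of the mechanism being analyzed: add symmetric Gaussian noise to $M$, then map the noisy matrix to one with spectrum $\lambda$ by keeping the noisy eigenvectors and swapping in the target eigenvalues. The noise calibration below is the standard Gaussian-mechanism choice (valid for $\varepsilon \le 1$) under an assumed Frobenius sensitivity of 1; the paper's contribution is the utility analysis, not this code.

```python
import numpy as np

def private_spectrum_approx(M, lam, eps, delta, sensitivity=1.0):
    """Hedged sketch: Gaussian mechanism plus projection onto the set of
    symmetric matrices with spectrum `lam`."""
    d = M.shape[0]
    # Standard Gaussian-mechanism scale for (eps, delta)-DP, assuming
    # Frobenius (L2) sensitivity `sensitivity` and eps <= 1.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    G = np.random.normal(0.0, sigma, size=(d, d))
    M_noisy = M + (G + G.T) / np.sqrt(2.0)   # symmetric (GOE-style) noise
    # Keep the noisy eigenvectors, replace eigenvalues by lam; matching
    # both in descending order is the Frobenius-optimal projection.
    vals, vecs = np.linalg.eigh(M_noisy)     # eigh returns ascending order
    V = vecs[:, np.argsort(vals)[::-1]]
    return V @ np.diag(np.sort(lam)[::-1]) @ V.T
```

For private rank-$k$ approximation, $\lambda$ would have only $k$ nonzero entries, so the output is a rank-$k$ matrix built from the noisy top-$k$ eigenvectors.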
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
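As a concrete illustration of the object being bounded (the kernel choice and sizes here are assumptions for the demo, not from the paper): form a kernel matrix, truncate its eigendecomposition to rank $r$, and inspect the worst entrywise error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                       # RBF (Gaussian) kernel matrix

vals, vecs = np.linalg.eigh(K)              # ascending eigenvalues
r = 10
K_r = vecs[:, -r:] @ np.diag(vals[-r:]) @ vecs[:, -r:].T  # rank-r truncation
print("max entrywise error:", np.abs(K - K_r).max())
```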
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- The matrix permanent and determinant from a spin system [0.0]
Gauss-Jordan elimination for the determinant of $M$ is then equivalent to the successive removal of the generalized zero eigenspace of the fermionic $\breve{M}$.
Our analysis may point the way to new strategies for classical and quantum evaluation of the permanent.
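For reference on the classical side, a standard exponential-time evaluation of the permanent is Ryser's inclusion-exclusion formula; this is a well-known baseline, not the spin-system construction the paper describes.

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Ryser's formula:
    perm(A) = (-1)^n * sum_{S nonempty} (-1)^{|S|} prod_i sum_{j in S} A[i, j].
    Exponential time, so only usable for small n."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            rowsums = A[:, cols].sum(axis=1)   # sum_{j in S} A[i, j]
            total += (-1) ** k * np.prod(rowsums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))   # 1*4 + 2*3 = 10
```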
arXiv Detail & Related papers (2023-07-10T16:34:55Z)
- Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex Gaussian Perturbations [28.431572772564518]
We show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\tilde{O}(\sqrt{kd})$.
This improves on previous work that requires that the gap between every pair of top-$k$ eigenvalues of $M$ is at least $\sqrt{d}$ for a similar bound.
arXiv Detail & Related papers (2023-06-29T03:18:53Z)
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
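As a rough illustration of the idea (a toy stand-in, not the paper's algorithm): if each row of $X$ reveals only two coordinates, the entries of $X^T X$ can still be estimated by averaging the observed products and rescaling. The i.i.d. Gaussian rows and uniform observation pattern below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 6
X = rng.normal(size=(n, d))

# Each row reveals exactly two uniformly chosen coordinates.
sums = np.zeros((d, d))
counts = np.zeros((d, d))
for i in range(n):
    j, k = rng.choice(d, size=2, replace=False)
    for a, b in [(j, k), (k, j), (j, j), (k, k)]:
        sums[a, b] += X[i, a] * X[i, b]
        counts[a, b] += 1

# Average the observed products per entry, then rescale by n.
avg = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
G_hat = n * avg
err = np.linalg.norm(G_hat - X.T @ X) / np.linalg.norm(X.T @ X)
print(f"relative Frobenius error: {err:.3f}")
```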
arXiv Detail & Related papers (2023-06-06T22:35:16Z)
- Near-optimal fitting of ellipsoids to random points [68.12685213894112]
The basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
We resolve the conjecture of Saunderson et al. up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(\,d^2/\mathrm{polylog}(d)\,)$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
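A minimal numerical sketch of the least-squares construction (dimensions and distributions are illustrative assumptions): each point contributes one linear constraint $x_i^\top Q x_i = 1$ on a symmetric matrix $Q$, and the fit succeeds when the minimum-norm solution is positive definite.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 20, 60                              # n far below the ~d^2 threshold
X = rng.normal(size=(n, d)) / np.sqrt(d)   # points with ||x|| ~ 1

# Each constraint x^T Q x = 1 is linear in Q: vec(x x^T) . vec(Q) = 1.
A = np.stack([np.outer(x, x).ravel() for x in X])
q, *_ = np.linalg.lstsq(A, np.ones(n), rcond=None)  # min-norm solution
Q = q.reshape(d, d)
Q = (Q + Q.T) / 2.0
# Positive definiteness certifies the points lie on {x : x^T Q x = 1}.
print("fit is an ellipsoid:", np.linalg.eigvalsh(Q).min() > 0)
```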
arXiv Detail & Related papers (2022-08-19T18:00:34Z)
- Private Matrix Approximation and Geometry of Unitary Orbits [29.072423395363668]
This problem seeks to approximate $A$ by a matrix whose spectrum is the same as $\Lambda$.
We give efficient and private algorithms that come with upper and lower bounds on the approximation error.
arXiv Detail & Related papers (2022-07-06T16:31:44Z)
- An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings [21.727073594338297]
This study is motivated by applications in machine learning and statistics.
We establish the weak limit of the empirical spectral distribution of these random matrices in a polynomial scaling regime.
The limiting law can be characterized as the free additive convolution of a Marchenko-Pastur law and a semicircle law.
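The same free additive convolution governs any asymptotically free sum of a Marchenko-Pastur and a semicircle component; a quick way to visualize the limit law (an illustration, not the kernel-matrix model itself) is to histogram the spectrum of an independent Wishart-plus-GOE sum:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 1000, 2000
X = rng.normal(size=(p, n))
W = X @ X.T / n                      # Wishart: Marchenko-Pastur component
H = rng.normal(size=(p, p))
G = (H + H.T) / np.sqrt(2 * p)       # GOE: semicircle component
eigs = np.linalg.eigvalsh(W + G)     # spectrum ~ MP boxplus semicircle
hist, edges = np.histogram(eigs, bins=50, density=True)
```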
arXiv Detail & Related papers (2022-05-12T18:50:21Z)
- Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we bound the Frobenius distance $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ between the expected resolvent and a deterministic equivalent $\tilde{R}(z)$.
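A quick Monte-Carlo sanity check of the fluctuation scale quoted above (the sizes, the test matrix $A = I_p$, and the evaluation point $z$ are arbitrary choices for the demo; note $\|I_p\|_* = p$):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, z = 100, 300, -1.0        # z < 0 keeps X X^T / n - z I invertible
A = np.eye(p)                   # deterministic test matrix, ||A||_* = p

vals = []
for _ in range(200):
    X = rng.normal(size=(p, n))
    R = np.linalg.inv(X @ X.T / n - z * np.eye(p))   # resolvent R(z)
    vals.append(np.trace(A @ R))
# Empirical std of tr(A R(z)) vs. the O(||A||_* / sqrt(n)) scale.
print(np.std(vals), p / np.sqrt(n))
```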
arXiv Detail & Related papers (2021-09-06T14:21:43Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
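For context, the quantities in question are spectral sums $\mathrm{tr}\,f(A) = \sum_i f(\lambda_i)$ of a PSD matrix $A$; the classical eigendecomposition-based computation that such quantum algorithms aim to speed up looks like this (the test matrix is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.normal(size=(50, 50))
A = B @ B.T + np.eye(50)           # positive definite test matrix

lam = np.linalg.eigvalsh(A)        # eigenvalues of A
log_det = np.sum(np.log(lam))      # spectral sum with f = log
trace_inv = np.sum(1.0 / lam)      # f(x) = 1/x
schatten_3 = np.sum(lam ** 3)      # f(x) = x^3 (Schatten-type sum)
print(log_det, trace_inv, schatten_3)
```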
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{k}{\epsilon})$ samples, a number linear in the high dimension $k$ and nearly linear in the matrices' typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Asymptotic errors for convex penalized linear regression beyond Gaussian matrices [23.15629681360836]
We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations.
We provide a rigorous derivation of an explicit formula for the mean squared error.
We show that our predictions agree remarkably well with numerics even for very moderate sizes.
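An empirical counterpart of this setting, with ridge as one representative convex penalty (the penalty choice, sizes, and noise level are assumptions for the demo; the paper's result is an exact asymptotic formula for the MSE that a simulation like this only estimates):

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, sigma, lam = 500, 1000, 0.5, 0.1
x0 = rng.normal(size=N)                      # unknown coefficient vector
A = rng.normal(size=(M, N)) / np.sqrt(N)     # sensing matrix
y = A @ x0 + sigma * rng.normal(size=M)      # noisy linear observations

# Ridge (L2-penalized least squares) estimate of x0.
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)
print("empirical MSE:", np.mean((x_hat - x0) ** 2))
```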
arXiv Detail & Related papers (2020-02-11T13:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.