A generalization of the randomized singular value decomposition
- URL: http://arxiv.org/abs/2105.13052v1
- Date: Thu, 27 May 2021 10:39:37 GMT
- Title: A generalization of the randomized singular value decomposition
- Authors: Nicolas Boull\'e, Alex Townsend
- Abstract summary: We generalize the theory of randomized SVD to multivariate Gaussian vectors, allowing one to incorporate prior knowledge of $A$ into the algorithm.
We construct a new covariance kernel for GPs, based on weighted Jacobi polynomials, which allows us to rapidly sample the GP and control the smoothness of the randomly generated functions.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The randomized singular value decomposition (SVD) is a popular and effective
algorithm for computing a near-best rank $k$ approximation of a matrix $A$
using matrix-vector products with standard Gaussian vectors. Here, we
generalize the theory of randomized SVD to multivariate Gaussian vectors,
allowing one to incorporate prior knowledge of $A$ into the algorithm. This
enables us to explore the continuous analogue of the randomized SVD for
Hilbert--Schmidt (HS) operators using operator-function products with functions
drawn from a Gaussian process (GP). We then construct a new covariance kernel
for GPs, based on weighted Jacobi polynomials, which allows us to rapidly
sample the GP and control the smoothness of the randomly generated functions.
Numerical examples on matrices and HS operators demonstrate the applicability
of the algorithm.
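As a rough illustration of the abstract's idea, the sketch below implements the standard randomized SVD and, optionally, draws the test vectors from $N(0, C)$ for a user-supplied covariance matrix $C$ instead of the standard Gaussian. This is only a minimal NumPy sketch of the finite-dimensional setting; the function and parameter names are illustrative, and the paper's construction of covariance kernels from weighted Jacobi polynomials (and the continuous HS-operator analogue) is not reproduced here.

```python
import numpy as np

def randomized_svd(A, k, p=5, C=None, rng=None):
    """Near-best rank-k approximation of A via a randomized sketch.

    If a covariance matrix C is given, the test vectors are drawn from
    N(0, C) rather than the standard Gaussian, which is one way to bias
    the sketch with prior knowledge of A (an illustrative stand-in for
    the paper's generalization). p is the oversampling parameter.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    if C is None:
        Omega = rng.standard_normal((n, k + p))      # standard Gaussian sketch
    else:
        L = np.linalg.cholesky(C)                    # C = L L^T, C must be SPD
        Omega = L @ rng.standard_normal((n, k + p))  # columns ~ N(0, C)
    Y = A @ Omega                                    # sample the range of A
    Q, _ = np.linalg.qr(Y)                           # orthonormal basis for range(Y)
    B = Q.T @ A                                      # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Usage: recover a matrix of exact rank 8 almost to machine precision.
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 8))
A = G @ G.T
U, s, Vt = randomized_svd(A, k=8)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```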
Related papers
- Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner [6.408773096179187]
We propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models.
The proposed algorithm leverages inducing points approximation with sparse grids, as well as additive Schwarz preconditioners.
arXiv Detail & Related papers (2024-08-01T00:19:36Z) - Low-complexity subspace-descent over symmetric positive definite manifold [9.346050098365648]
We develop low-complexity algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold.
The proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix.
arXiv Detail & Related papers (2023-05-03T11:11:46Z) - Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z) - Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on Gaussian and Random Orthogonal Ensembles [12.005731086591139]
Householder Dice (HD) is an algorithm for simulating dynamics on dense random matrix ensembles with translation-invariant properties.
The memory and computational costs of the HD algorithm are $\mathcal{O}(nT)$ and $\mathcal{O}(nT^2)$, respectively.
Numerical results demonstrate the promise of the HD algorithm as a new computational tool in the study of high-dimensional random systems.
arXiv Detail & Related papers (2021-01-19T04:50:53Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.