Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and
Graph Problems
- URL: http://arxiv.org/abs/2006.14015v1
- Date: Wed, 24 Jun 2020 19:33:49 GMT
- Title: Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and
Graph Problems
- Authors: Cyrus Rashtchian, David P. Woodruff and Hanlin Zhu
- Abstract summary: We consider the general problem of learning about a matrix through vector-matrix-vector queries.
These queries provide the value of $\boldsymbol{u}^{\mathrm{T}}\boldsymbol{M}\boldsymbol{v}$ over a fixed field.
We provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs.
- Score: 58.83118651518438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the general problem of learning about a matrix through
vector-matrix-vector queries. These queries provide the value of
$\boldsymbol{u}^{\mathrm{T}}\boldsymbol{M}\boldsymbol{v}$ over a fixed field
$\mathbb{F}$ for a specified pair of vectors $\boldsymbol{u},\boldsymbol{v} \in
\mathbb{F}^n$. To motivate these queries, we observe that they generalize many
previously studied models, such as independent set queries, cut queries, and
standard graph queries. They also specialize the recently studied matrix-vector
query model. Our work is exploratory and broad, and we provide new upper and
lower bounds for a wide variety of problems, spanning linear algebra,
statistics, and graphs. Many of our results are nearly tight, and we use
diverse techniques from linear algebra, randomized algorithms, and
communication complexity.
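To make the query model concrete, here is a minimal NumPy sketch (function and variable names are ours) of a vector-matrix-vector oracle over the reals, showing how an entry query and a cut query each reduce to a single vMv query, as the abstract notes:

```python
import numpy as np

def vmv_query(M, u, v):
    """One vector-matrix-vector query: returns u^T M v."""
    return u @ M @ v

# Adjacency matrix of the path 1 - 0 - 2 (edges {0,1} and {0,2}).
M = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
e = np.eye(3)

# Entry query: e_i^T M e_j reads off the single entry M[i, j].
assert vmv_query(M, e[0], e[1]) == M[0, 1]

# Cut query: with indicator vectors, 1_S^T M 1_{V\S} counts the
# edges crossing the cut (S, V \ S).
S = np.array([1., 0., 0.])       # S = {0}
print(vmv_query(M, S, 1.0 - S))  # 2.0: both edges cross the cut
```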
Related papers
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^{\mathrm{T}}X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
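As a rough illustration of this imputation idea (a hedged sketch under a uniform-sampling assumption, not the authors' exact algorithm; all names are ours): each row that reveals coordinates $j$ and $k$ contributes one term toward the Gram entry $(X^{\mathrm{T}}X)_{jk}$, and averaging these contributions fills in the matrix:

```python
import numpy as np

def impute_gram(obs, d):
    """Estimate the Gram matrix X^T X of an n x d matrix X from two
    observed entries per row (column indices assumed uniform at random).

    obs: one (j, k, xj, xk) tuple per row of X.
    """
    total = np.zeros((d, d))
    count = np.zeros((d, d))
    for j, k, xj, xk in obs:
        # Each row contributes products for the pair (j, k) and both diagonals.
        for a, b, v in [(j, k, xj * xk), (j, j, xj * xj), (k, k, xk * xk)]:
            total[a, b] += v
            count[a, b] += 1
            if a != b:
                total[b, a] += v
                count[b, a] += 1
    est = np.divide(total, count, out=np.full((d, d), np.nan), where=count > 0)
    # est[j, k] estimates the row-average of x_j * x_k, i.e. (X^T X)[j, k] / n.
    return est * len(obs)
```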
arXiv Detail & Related papers (2023-06-06T22:35:16Z)
- On the well-spread property and its relation to linear regression [4.619541348328937]
We show that consistent recovery of the parameter vector in a robust linear regression model is information-theoretically impossible.
We show that it is possible to efficiently certify whether a given $n$-by-$d$ matrix is well-spread if the number of observations is quadratic in the ambient dimension.
arXiv Detail & Related papers (2022-06-16T11:17:44Z)
- Sign and Basis Invariant Networks for Spectral Graph Representation Learning [75.18802152811539]
We introduce SignNet and BasisNet -- new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner.
Our networks are theoretically strong for graph representation learning -- they can approximate any spectral graph convolution.
Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
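The sign invariance at the heart of SignNet can be sketched in a few lines: an eigenvector $v$ is only defined up to sign, so processing it as $\rho(\phi(v) + \phi(-v))$ yields an output unchanged under $v \mapsto -v$. Below is a toy NumPy version with stand-in MLPs (the paper's actual architecture is richer):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny two-layer network standing in for the paper's phi and rho."""
    W1, W2 = params
    return np.tanh(x @ W1) @ W2

n, h = 8, 16
phi = (rng.normal(size=(n, h)), rng.normal(size=(h, h)))
rho = (rng.normal(size=(h, h)), rng.normal(size=(h, 4)))

def sign_invariant_embed(v):
    # phi(v) + phi(-v) is unchanged when v is replaced by -v, which is
    # exactly the sign ambiguity of a graph eigenvector.
    return mlp(rho, mlp(phi, v) + mlp(phi, -v))

v = rng.normal(size=n)
assert np.allclose(sign_invariant_embed(v), sign_invariant_embed(-v))
```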
arXiv Detail & Related papers (2022-02-25T23:11:59Z)
- Improved analysis of randomized SVD for top-eigenvector approximation [16.905540623795467]
We present a novel analysis of the randomized SVD algorithm of Halko et al. (2011) and derive tight bounds in many cases of interest.
Notably, this is the first work that provides non-trivial bounds of $R(\hat{\mathbf{u}})$ for randomized SVD with any number of iterations.
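For intuition, the single-vector case of this randomized SVD reduces to the power method from a Gaussian start; a minimal sketch (ours, not the paper's exact setup) that reports the Rayleigh quotient $R(\hat{\mathbf{u}}) = \hat{\mathbf{u}}^{\mathrm{T}} M \hat{\mathbf{u}}$:

```python
import numpy as np

def randomized_top_eigvec(M, iters, rng):
    """Power iteration from a random Gaussian start -- the single-vector
    case of the randomized SVD of Halko et al. (2011)."""
    u = rng.normal(size=M.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = M @ u
        u /= np.linalg.norm(u)
    return u

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
M = A @ A.T                          # random PSD test matrix
u_hat = randomized_top_eigvec(M, iters=10, rng=rng)
print(u_hat @ M @ u_hat)             # R(u_hat): approaches the top eigenvalue
print(np.linalg.eigvalsh(M)[-1])     # exact top eigenvalue, for comparison
```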
arXiv Detail & Related papers (2022-02-16T11:12:07Z)
- On the query complexity of connectivity with global queries [0.2538209532048867]
We study the query complexity of determining if a graph is connected with global queries.
We show that a zero-error randomized algorithm must make $\Omega(n)$ linear queries in expectation to solve connectivity.
arXiv Detail & Related papers (2021-09-05T16:22:55Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection [58.5350491065936]
We consider a structural assumption on the graph Laplacian matrix $L$.
The first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria.
We design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in \mathcal{H}_u^+$ given $\bar{C}$.
arXiv Detail & Related papers (2020-10-25T18:12:50Z)
- Recovery of sparse linear classifiers from mixture of responses [42.228977798395846]
We show how to learn a collection of hyperplanes from a sequence of binary responses.
Each response is a result of querying with a vector and indicates the side of a randomly chosen hyperplane from the collection the query vector belongs to.
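This query model is easy to simulate (a hedged sketch; the hidden hyperplanes and all names are ours): each query vector is answered by a uniformly random hyperplane from the collection, revealing only a single sign bit:

```python
import numpy as np

rng = np.random.default_rng(2)
d, ell = 10, 3
W = rng.normal(size=(ell, d))
W[:, 4:] = 0.0                 # the hidden hyperplanes are sparse

def mixture_response(v):
    """Answer one query: a uniformly random hyperplane from the hidden
    collection reports which side of it the query vector v falls on."""
    w = W[rng.integers(ell)]
    return 1 if w @ v >= 0 else -1

# The learner sees only these sign bits and must recover every row of W.
queries = rng.normal(size=(5, d))
print([mixture_response(v) for v in queries])
```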
arXiv Detail & Related papers (2020-10-22T22:03:53Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.