The Structural Complexity of Matrix-Vector Multiplication
- URL: http://arxiv.org/abs/2502.21240v2
- Date: Thu, 06 Mar 2025 19:17:52 GMT
- Title: The Structural Complexity of Matrix-Vector Multiplication
- Authors: Emile Anand, Jan van den Brand, Rose McCarty
- Abstract summary: We show that the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde{O}(n^{2-1/d})$ query time. Our results yield the first non-trivial upper bounds for many applications.
- Score: 0.10485739694839669
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of preprocessing an $n\times n$ matrix $M$ and supporting queries that, for any vector $v$, return the matrix-vector product $Mv$. This problem has been extensively studied in both theory and practice: on one side, practitioners have developed algorithms that are highly efficient in practice, whereas theoreticians have proven that the problem cannot be solved faster than naive multiplication in the worst case. This lower bound holds even in the average case, implying that existing average-case analyses cannot explain this gap between theory and practice. Therefore, we study the problem for structured matrices. We show that for $n\times n$ matrices of VC-dimension $d$, the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde{O}(n^{2-1/d})$ query time. Given the low constant VC-dimensions observed in most real-world data, our results posit an explanation for why the problem can be solved so much faster in practice. Moreover, our bounds hold even if the matrix does not have low VC-dimension, but is obtained by (possibly adversarially) corrupting at most a subquadratic number of entries of any unknown low VC-dimension matrix. Our results yield the first non-trivial upper bounds for many applications. In previous works, the online matrix-vector hypothesis (conjecturing that quadratic time is needed per query) was used to prove many conditional lower bounds, showing that it is impossible to compute and maintain high-accuracy estimates for shortest paths, Laplacian solvers, effective resistance, and triangle detection in graphs subject to node insertions and deletions in subquadratic time. Yet, via a reduction to our matrix-vector multiplication result, we show that we can maintain the aforementioned problems efficiently if the input is structured, providing the first subquadratic upper bounds in the high-accuracy regime.
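For intuition, the following is a minimal Python/NumPy sketch of the preprocess-then-query interface described in the abstract. The class name `NaiveMatVecOracle` and its structure are illustrative assumptions, not the authors' data structure: this naive baseline simply stores $M$ during preprocessing and spends $\Theta(n^2)$ time per query, which is exactly the per-query cost the paper improves to $\tilde{O}(n^{2-1/d})$ when $M$ has VC-dimension $d$.

```python
import numpy as np

# Minimal sketch of the preprocess/query interface studied in the paper
# (illustrative only; not the authors' algorithm). This naive oracle answers
# each query in Theta(n^2) time, the baseline that the paper's data structure
# improves to roughly n^(2 - 1/d) for matrices of VC-dimension d.
class NaiveMatVecOracle:
    def __init__(self, M: np.ndarray):
        # "Preprocessing" for the naive baseline: just store M.
        self.M = M

    def query(self, v: np.ndarray) -> np.ndarray:
        # Answer a query: return the matrix-vector product M v.
        return self.M @ v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    M = rng.integers(0, 2, size=(n, n))         # a 0/1 matrix, as in the structured setting
    oracle = NaiveMatVecOracle(M)               # preprocessing step
    v = rng.standard_normal(n)
    assert np.allclose(oracle.query(v), M @ v)  # query step
```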
Related papers
- Improving Algorithmic Efficiency using Cryptography [11.496343300483904]
We show how to use cryptography to improve the time complexity of solving computational problems.
We show that under standard cryptographic assumptions, we can design algorithms that are asymptotically faster than existing ones.
arXiv Detail & Related papers (2025-02-18T17:08:59Z) - Quantum Time-Space Tradeoffs for Matrix Problems [0.5524804393257919]
We consider the time and space required for quantum computers to solve a range of problems involving matrices.
For almost all matrices $A$, we prove that quantum circuits with at most $T$ input queries and $S$ qubits of memory require $T=\Omega(n^2/S)$.
Because many of our lower bounds match deterministic algorithms with the same time and space complexity, we show that quantum computers cannot provide any advantage for these problems with any space bound.
arXiv Detail & Related papers (2024-01-10T18:38:43Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Fast Matrix Multiplication Without Tears: A Constraint Programming
Approach [8.52818380743467]
It is known that the multiplication of an $N \times M$ matrix with an $M \times P$ matrix can be performed using fewer multiplications than what the naive $NMP$ approach suggests.
This gives rise to the constraint satisfaction problem of fast matrix multiplication.
We propose a simple yet novel Constraint Programming approach to find non-commutative algorithms for fast matrix multiplication.
arXiv Detail & Related papers (2023-06-01T19:15:24Z) - Robust 1-bit Compressive Sensing with Partial Gaussian Circulant
Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - What if Neural Networks had SVDs? [66.91160214071088]
Various Neural Networks employ time-consuming matrix operations like matrix inversion.
We present an algorithm that is fast enough to speed up several matrix operations.
arXiv Detail & Related papers (2020-09-29T12:58:52Z) - Ranky : An Approach to Solve Distributed SVD on Large Sparse Matrices [0.0]
Singular Value Decomposition (SVD) is a well studied research topic in many fields and applications.
We propose Ranky, set of methods to solve rank problem on large and sparse matrices in a distributed manner.
arXiv Detail & Related papers (2020-09-21T11:36:28Z) - Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and
Graph Problems [58.83118651518438]
We consider the general problem of learning about a matrix through vector-matrix-vector queries.
These queries provide the value of $\boldsymbol{u}^{\mathrm{T}}\boldsymbol{M}\boldsymbol{v}$ over a fixed field.
We provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs.
arXiv Detail & Related papers (2020-06-24T19:33:49Z) - Sketching Transformed Matrices with Applications to Natural Language
Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.