The Ubiquitous Sparse Matrix-Matrix Products
- URL: http://arxiv.org/abs/2508.04077v1
- Date: Wed, 06 Aug 2025 04:26:52 GMT
- Title: The Ubiquitous Sparse Matrix-Matrix Products
- Authors: Aydın Buluç
- Abstract summary: Multiplication of a sparse matrix with another (dense or sparse) matrix is a fundamental operation that captures the computational patterns of many data science applications. We provide a unifying treatment of the sparse matrix-matrix operation and its rich application space, including machine learning, computational biology and chemistry, graph algorithms, and scientific computing.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiplication of a sparse matrix with another (dense or sparse) matrix is a fundamental operation that captures the computational patterns of many data science applications, including but not limited to graph algorithms, sparsely connected neural networks, graph neural networks, clustering, and many-to-many comparisons of biological sequencing data. In many application scenarios, the matrix multiplication takes place on an arbitrary algebraic semiring where the scalar operations are overloaded with user-defined functions with certain properties, or on a more general heterogeneous algebra where even the domains of the input matrices can differ. Here, we provide a unifying treatment of the sparse matrix-matrix operation and its rich application space, including machine learning, computational biology and chemistry, graph algorithms, and scientific computing.
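To make the semiring abstraction concrete, here is a minimal sketch of SpGEMM over a user-supplied semiring in Python. The dict-of-dicts sparse representation and function names are illustrative choices, not the paper's notation or any particular library's API.

```python
# A minimal sketch of SpGEMM over a user-supplied semiring, using a
# dict-of-dicts sparse format (row -> {col: val}). Names are illustrative.
def spgemm(A, B, add=min, mul=lambda x, y: x + y, identity=float("inf")):
    """Multiply sparse matrices A and B over a semiring.

    Defaults give the (min, +) "tropical" semiring used for shortest paths;
    pass add=operator.add, mul=operator.mul, identity=0 for ordinary SpGEMM.
    """
    C = {}
    for i, row in A.items():
        acc = {}
        for k, a_ik in row.items():              # nonzeros of row i of A
            for j, b_kj in B.get(k, {}).items():  # matching row k of B
                acc[j] = add(acc.get(j, identity), mul(a_ik, b_kj))
        if acc:
            C[i] = acc
    return C

# Min-plus "squaring" of a weighted adjacency matrix computes cheapest
# two-hop paths: here 0 -> 1 -> 2 costs 7.0.
A = {0: {1: 3.0, 2: 10.0}, 1: {2: 4.0}}
print(spgemm(A, A))   # {0: {2: 7.0}}
```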
Related papers
- Spectral Estimation with Free Decompression [47.81955761814048]
We introduce a novel method of "free decompression" to estimate the spectrum of very large (impalpable) matrices. Our method can be used to extrapolate from the empirical spectral densities of small submatrices to infer the eigenspectrum of extremely large matrices.
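The empirical ingredient of that pipeline, computing eigenvalue spectra of random principal submatrices, can be sketched in a few lines of numpy; the free-decompression extrapolation itself is the paper's contribution and is not reproduced here.

```python
# Sketch: empirical spectral densities of random principal submatrices.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
G = rng.standard_normal((n, n))
M = (G + G.T) / np.sqrt(2 * n)   # a large symmetric "Wigner-like" matrix

def submatrix_spectrum(M, m, rng):
    idx = rng.choice(M.shape[0], size=m, replace=False)
    return np.linalg.eigvalsh(M[np.ix_(idx, idx)])

# Spectra of m x m principal submatrices; for Wigner-like matrices their
# support shrinks like sqrt(m / n), which is what extrapolation exploits.
for m in (200, 400, 800):
    ev = submatrix_spectrum(M, m, rng)
    print(m, ev.min(), ev.max())
```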
arXiv Detail & Related papers (2025-06-13T17:49:25Z)
- Fast Homomorphic Linear Algebra with BLAS [11.269481520748839]
Homomorphic encryption opens a wide range of applications in privacy-preserving data manipulation, notably in AI. Many of those applications require significant linear algebra computations (matrix-vector and matrix-matrix products). This central role of linear algebra goes far beyond homomorphic algebra and applies to most areas of scientific computing. We show that the efficiency loss between CKKS-based encrypted square matrix multiplication and double-precision floating-point square matrix multiplication is a mere factor of 4-12, depending on the precise situation.
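For orientation only, here is a small encrypted vector-matrix product under CKKS using the TenSEAL Python bindings as a generic stand-in; this is not the paper's BLAS-backed implementation, and the parameters follow common library tutorials.

```python
# Illustration only: an encrypted vector x plaintext matrix product under
# CKKS via the TenSEAL library (pip install tenseal). A generic stand-in,
# not the paper's BLAS-backed implementation or its benchmark setup.
import tenseal as ts

ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # common tutorial parameters
)
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()                 # rotations are needed for matmul

x = [0.5, 1.0, 2.0]                        # plaintext input vector
W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 x 2 plaintext matrix

enc_x = ts.ckks_vector(ctx, x)             # encrypt the vector
enc_y = enc_x.mm(W)                        # encrypted vector-matrix product
print(enc_y.decrypt())                     # approximately [13.5, 17.0]
```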
arXiv Detail & Related papers (2025-03-20T12:19:47Z)
- Graph Neural Networks and Applied Linear Algebra [1.8749305679160366]
Graph neural networks (GNNs) are an approach well suited to sparse matrix computations.
This paper provides an introduction to GNNs for a numerical linear algebra audience.
Concrete examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs.
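One such task is easy to spell out: a graph-convolution-style layer is exactly a sparse-times-dense matrix product. A minimal sketch, with a row-normalized adjacency matrix as one common (but not universal) normalization choice:

```python
# One graph-convolution-style layer as sparse linear algebra:
# H' = relu(A_hat @ H @ W), with A_hat a normalized sparse adjacency matrix.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, f_in, f_out = 5, 4, 3

# Sparse adjacency with self-loops, row-normalized.
A = sp.random(n, n, density=0.3, random_state=0, format="csr")
A = A + sp.eye(n, format="csr")
deg = np.asarray(A.sum(axis=1)).ravel()
A_hat = sp.diags(1.0 / deg) @ A

H = rng.standard_normal((n, f_in))        # node features
W = rng.standard_normal((f_in, f_out))    # learnable weights

H_next = np.maximum(A_hat @ H @ W, 0.0)   # SpMM + dense GEMM + ReLU
print(H_next.shape)                       # (5, 3)
```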
arXiv Detail & Related papers (2023-10-21T18:37:56Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
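The core abstraction, linear operators that compose lazily rather than materializing dense matrices, can be illustrated in a toy form; the sketch below shows only the concept and is not CoLA's actual API.

```python
# Toy illustration of a compositional linear-operator abstraction:
# operators expose matvec and compose lazily, so no dense matrix is formed.
import numpy as np

class LinOp:
    def __init__(self, shape, matvec):
        self.shape, self.matvec = shape, matvec

    def __matmul__(self, other):  # lazy composition: (A @ B) x = A (B x)
        if isinstance(other, LinOp):
            return LinOp((self.shape[0], other.shape[1]),
                         lambda x: self.matvec(other.matvec(x)))
        return self.matvec(other)  # apply to a vector

def diag_op(d):
    return LinOp((len(d), len(d)), lambda x: d * x)

def shift_op(n):                   # circular shift, stored matrix-free
    return LinOp((n, n), lambda x: np.roll(x, 1))

n = 4
A = diag_op(np.arange(1.0, n + 1)) @ shift_op(n)  # composed, O(n) memory
print(A @ np.ones(n))                             # [1. 2. 3. 4.]
```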
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Multiresolution kernel matrix algebra [0.0]
We show that the compression of kernel matrices by means of samplets produces optimally sparse matrices in a certain S-format.
The inverse of a kernel matrix (if it exists) is compressible in the S-format as well.
The matrix algebra is justified mathematically by pseudodifferential calculus.
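Samplet constructions are beyond a short snippet, but the compressibility phenomenon itself can be illustrated crudely by entry thresholding of a rapidly decaying kernel matrix; note this plain thresholding is only a stand-in, not the S-format construction the paper analyzes.

```python
# Crude illustration of kernel-matrix compressibility via plain entry
# thresholding (NOT the samplet S-format construction of the paper).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
pts = rng.random((500, 2))                 # scattered points in the unit square
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-10.0 * D)                      # exponential kernel matrix

K_sparse = sp.csr_matrix(np.where(np.abs(K) > 1e-3, K, 0.0))
x = rng.standard_normal(500)
err = np.linalg.norm(K @ x - K_sparse @ x) / np.linalg.norm(K @ x)
print(K_sparse.nnz / K.size, err)          # stored fraction vs. relative error
```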
arXiv Detail & Related papers (2022-11-21T17:50:22Z)
- A Structured Sparse Neural Network and Its Matrix Calculations Algorithm [0.0]
We introduce a nonsymmetric tridiagonal matrix with off-diagonal sparse entries and offset sub- and super-diagonals.
For cases where the matrix inverse does not exist, a least-squares-type pseudoinverse is provided.
Results show significant improvement in computational costs, especially as the size of the matrix increases.
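A hedged sketch of this matrix family and the pseudoinverse fallback with scipy/numpy; the offsets and sizes below are illustrative, not the paper's construction.

```python
# Sketch: a nonsymmetric tridiagonal matrix with an extra sparse diagonal
# at an arbitrary offset, plus a least-squares pseudoinverse fallback.
import numpy as np
import scipy.sparse as sp

n = 8
rng = np.random.default_rng(0)
A = sp.diags(
    [rng.standard_normal(n - 1),   # sub-diagonal
     rng.standard_normal(n),       # main diagonal
     rng.standard_normal(n - 1),   # super-diagonal
     rng.standard_normal(n - 3)],  # offset super-diagonal (illustrative)
    offsets=[-1, 0, 1, 3],
    format="csr",
)

M = A.toarray()
M_pinv = np.linalg.pinv(M)  # least-squares pseudoinverse; equals the
                            # inverse whenever M is nonsingular
print(np.allclose(M @ M_pinv @ M, M))   # True (Moore-Penrose property)
```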
arXiv Detail & Related papers (2022-07-02T19:38:48Z)
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD matrices and their "square-roots".
We show how these techniques can be used for multiple downstream tasks.
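As a baseline illustration of what such dimensionality reduction looks like, a plain Gaussian sketch approximately preserves quadratic forms of a non-PSD matrix; the paper's constructions are more refined than this generic one.

```python
# Generic Gaussian sketch of a non-PSD matrix: quadratic forms through the
# sketch approximate those through the full matrix.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 200
A = rng.standard_normal((n, n))
A = A - A.T                                    # non-PSD (skew-symmetric)

S = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian sketch matrix
SA = S @ A                                     # k x n sketch of A

# In expectation, (SA x) . (SA y) equals (A x) . (A y).
x, y = rng.standard_normal(n), rng.standard_normal(n)
print((A @ x) @ (A @ y), (SA @ x) @ (SA @ y))
```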
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- What if Neural Networks had SVDs? [66.91160214071088]
Various neural networks employ time-consuming matrix operations like matrix inversion.
We present an algorithm that is fast enough to speed up several such matrix operations.
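The payoff of keeping a weight matrix in factored SVD form is that spectral operations become cheap manipulations of the factors; a minimal numpy sketch of the idea (not the paper's algorithm):

```python
# With W kept as U diag(s) V^T (U, V orthogonal), inversion reduces to
# O(n^2) work on the factors instead of a fresh O(n^3) decomposition.
import numpy as np

rng = np.random.default_rng(0)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal factor
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = rng.uniform(0.5, 2.0, n)                       # singular values

W = U @ np.diag(s) @ V.T
W_inv = V @ np.diag(1.0 / s) @ U.T                 # inverse from the factors
print(np.allclose(W @ W_inv, np.eye(n)))           # True
```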
arXiv Detail & Related papers (2020-09-29T12:58:52Z)
- Sketching Transformed Matrices with Applications to Natural Language Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
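A classical point of comparison: the product of a small matrix with a transformed matrix can be estimated by importance-sampling outer products, applying the transform only to sampled rows. The estimator below is the generic baseline, not the paper's sketching algorithm.

```python
# Baseline estimator of B @ f(A): importance-sample outer products and
# apply the entrywise transform f only to the sampled rows of A.
import numpy as np

rng = np.random.default_rng(0)
m, n, d, s = 20, 5000, 30, 500
B = rng.standard_normal((m, n))
A = rng.standard_normal((n, d))
f = np.tanh                                 # an entrywise transform

# Sample s indices proportionally to squared column norms of B.
p = np.linalg.norm(B, axis=0) ** 2
p /= p.sum()
idx = rng.choice(n, size=s, p=p)

est = (B[:, idx] / (s * p[idx])) @ f(A[idx, :])   # unbiased estimate
exact = B @ f(A)
print(np.linalg.norm(est - exact) / np.linalg.norm(exact))
```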
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.