Improving Algorithmic Efficiency using Cryptography
- URL: http://arxiv.org/abs/2502.13065v2
- Date: Tue, 22 Apr 2025 17:51:42 GMT
- Title: Improving Algorithmic Efficiency using Cryptography
- Authors: Vinod Vaikuntanathan, Or Zamir
- Abstract summary: We show how to use cryptography to improve the time complexity of solving computational problems. We show that under standard cryptographic assumptions, we can design algorithms that are asymptotically faster than existing ones.
- Score: 11.496343300483904
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cryptographic primitives have been used for various non-cryptographic objectives, such as eliminating or reducing randomness and interaction. We show how to use cryptography to improve the time complexity of solving computational problems. Specifically, we show that under standard cryptographic assumptions, we can design algorithms that are asymptotically faster than existing ones while maintaining correctness. As a concrete demonstration, we construct a distribution of trapdoored matrices with the following properties: (a) computationally bounded adversaries cannot distinguish a random matrix from one drawn from this distribution (under computational hardness assumptions), and (b) given a trapdoor, we can multiply such an $n \times n$ matrix with any vector in near-linear (in $n$) time. We provide constructions both over finite fields and over the reals. This enables a broad speedup technique: any algorithm relying on a random matrix -- such as those that use various notions of dimensionality reduction -- can replace it with a matrix from our distribution, achieving computational speedups while preserving correctness. Using these trapdoored matrices, we present the first uniform reduction from worst-case to approximate and average-case matrix multiplication with optimal parameters (improving on Hirahara--Shimizu STOC 2025, albeit under computational assumptions), the first worst-case to average-case reductions for matrix inversion, solving a linear system, and computing a determinant, as well as a speedup of inference time in classification models.
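The core algorithmic claim, property (b), is that the implicit $n \times n$ matrix can be applied to a vector in near-linear time. The Python sketch below is not the paper's trapdoored construction and carries none of its indistinguishability guarantees; it is a hedged toy illustrating only the mechanism of holding a matrix implicitly as a product of cheap factors (sign diagonals, fast Walsh-Hadamard transforms, permutations) so that a matrix-vector product costs O(n log n) rather than O(n^2). All function names and parameters are invented for this illustration.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of two, O(n log n)."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def sample_implicit_matrix(n, rounds=3, seed=None):
    """Sample the hidden structure (per-round sign diagonal and permutation).
    The implicit n x n matrix is a product of `rounds` blocks P @ H @ D, never materialized."""
    rng = np.random.default_rng(seed)
    return [(rng.permutation(n), rng.choice([-1.0, 1.0], size=n)) for _ in range(rounds)]

def apply_implicit_matrix(structure, v):
    """Matrix-vector product using only the hidden structure:
    O(rounds * n log n) instead of the O(n^2) cost of a dense matvec."""
    x = np.asarray(v, dtype=float)
    for perm, signs in structure:
        x = fwht(signs * x)[perm]   # apply D, then H, then P
    return x

# Usage: apply an implicit 16384 x 16384 matrix to a vector without ever forming it.
n = 1 << 14
structure = sample_implicit_matrix(n, rounds=3, seed=0)
y = apply_implicit_matrix(structure, np.random.default_rng(1).standard_normal(n))
print(y.shape)  # (16384,)
```

In the paper's framework, the trapdoor is the secret structure that enables the fast product, while an adversary without it should be unable to distinguish the matrix from a uniformly random one; the factor-product toy above mirrors only the efficiency half of that picture.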
Related papers
- Tensor networks based quantum optimization algorithm [0.0]
In optimization, one of the well-known classical algorithms is power iteration.
We propose a quantum realization to circumvent its pitfall.
Our methodology becomes instance agnostic and thus allows one to address black-box optimization within the framework of quantum computing.
arXiv Detail & Related papers (2024-04-23T13:49:11Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplying it with a random sketch matrix, and then solve the optimization problem on the resulting sketch (a minimal sketch-and-solve example appears after this list).
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - A fast Multiplicative Updates algorithm for Non-negative Matrix Factorization [2.646309221150203]
We propose to improve the Multiplicative Updates algorithm by crafting a tighter upper bound of the Hessian matrix for each alternate subproblem.
Convergence is still ensured, and we observe in practice, on both synthetic and real-world datasets, that the proposed fastMU algorithm is often several orders of magnitude faster than the regular Multiplicative Updates algorithm.
arXiv Detail & Related papers (2023-03-31T12:09:36Z) - Batch-efficient EigenDecomposition for Small and Medium Matrices [65.67315418971688]
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications.
We propose a QR-based ED method dedicated to the application scenarios of computer vision.
arXiv Detail & Related papers (2022-07-09T09:14:12Z) - A quantum algorithm for solving eigenproblem of the Laplacian matrix of
a fully connected weighted graph [4.045204834863644]
We propose an efficient quantum algorithm to solve the eigenproblem of the Laplacian matrix of a fully connected weighted graph.
Specifically, we adopt the optimal Hamiltonian simulation technique based on the block-encoding framework.
We also show that our algorithm can be extended to solve the eigenproblem of symmetric (non-symmetric) normalized Laplacian matrix.
arXiv Detail & Related papers (2022-03-28T02:24:08Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other method is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
arXiv Detail & Related papers (2021-12-17T17:04:34Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Fast and Accurate Pseudoinverse with Sparse Matrix Reordering and
Incremental Approach [4.710916891482697]
A pseudoinverse is a generalization of a matrix inverse, which has been extensively utilized in machine learning.
FastPI is a novel incremental singular value decomposition (SVD) based pseudoinverse method for sparse matrices.
We show that FastPI computes the pseudoinverse faster than other approximate methods without loss of accuracy.
arXiv Detail & Related papers (2020-11-09T07:47:10Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
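Several entries above (the learned CountSketch work and the SRHT iterative-sketching work) rely on the same sketch-and-solve pattern that the main paper accelerates by swapping the random matrix for a trapdoored one. As the minimal example referenced from the CountSketch item, the toy Python below applies a plain (unlearned) CountSketch to a least-squares problem; the dimensions, sketch size, and solver choices are assumptions for illustration, not taken from any of the listed papers.

```python
import numpy as np

def countsketch_apply(A, m, rng):
    """Apply an m x n CountSketch matrix S to A (n rows) in O(nnz(A)) time:
    each row of A is added, with a random sign, to one uniformly random row of SA."""
    n = A.shape[0]
    rows = rng.integers(0, m, size=n)        # position of the single non-zero in each column of S
    signs = rng.choice([-1.0, 1.0], size=n)  # value (+1 or -1) of that non-zero
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, rows, signs[:, None] * A)
    return SA

# Sketch-and-solve least squares: solve min ||S(Ax - b)|| instead of min ||Ax - b||.
rng = np.random.default_rng(0)
n, d, m = 20000, 50, 1000                    # sketch size m chosen well above d
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)
SAb = countsketch_apply(np.hstack([A, b[:, None]]), m, rng)
x_sketch, *_ = np.linalg.lstsq(SAb[:, :d], SAb[:, d], rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))    # small for m >> d
```

For m well above d, the sketched solution is close to the exact one, while the sketching step itself costs only time proportional to the number of non-zeros of A.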