Quantum Algorithm for Matrix Logarithm by Integral Formula
- URL: http://arxiv.org/abs/2111.08914v1
- Date: Wed, 17 Nov 2021 05:46:12 GMT
- Title: Quantum Algorithm for Matrix Logarithm by Integral Formula
- Authors: Songling Zhang, Hua Xiang
- Abstract summary: Recently, a quantum algorithm that computes the state $|f\rangle$ corresponding to the matrix-vector product $f(A)b$ was proposed.
We propose a quantum algorithm that uses the LCU method and the block-encoding technique as subroutines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The matrix logarithm is one of the important matrix functions. Recently, a
quantum algorithm that computes the state $|f\rangle$ corresponding to the
matrix-vector product $f(A)b$ was proposed in [Takahira, et al. Quantum
algorithm for matrix functions by Cauchy's integral formula, QIC, Vol.20,
No.1\&2, pp.14-36, 2020]. However, it cannot be applied to the matrix logarithm.
In this paper, we propose a quantum algorithm, which uses the LCU method and
the block-encoding technique as subroutines, to compute the state $|f\rangle =
\log(A)|b\rangle / \|\log(A)|b\rangle\|$ corresponding to $\log(A)b$ via the
integral representation of $\log(A)$ and the Gauss-Legendre quadrature rule.
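The integral representation used here is the standard formula $\log(A) = \int_0^1 (A-I)\,[t(A-I)+I]^{-1}\,dt$, valid when $A$ has no eigenvalues on the closed negative real axis. Below is a minimal classical NumPy sketch of its Gauss-Legendre discretization applied to a vector $b$; it is meant only to make the quadrature structure concrete (the quantum algorithm assembles the per-node terms with the LCU and block-encoding subroutines mentioned in the abstract), and the node count `n_nodes` is an illustrative choice, not a value from the paper.

```python
import numpy as np

def log_A_times_b(A, b, n_nodes=32):
    r"""Classically approximate log(A) @ b via the integral representation
        log(A) = \int_0^1 (A - I) [t (A - I) + I]^{-1} dt,
    discretized with an n_nodes-point Gauss-Legendre rule on [0, 1].
    Each quadrature node costs one shifted linear-system solve."""
    I = np.eye(A.shape[0])
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1].
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    t, w = 0.5 * (x + 1.0), 0.5 * w
    out = np.zeros_like(b, dtype=float)
    for tk, wk in zip(t, w):
        out += wk * ((A - I) @ np.linalg.solve(tk * (A - I) + I, b))
    return out

# Tiny example: a symmetric positive definite 2x2 matrix.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([1.0, -1.0])
print(log_A_times_b(A, b))
```

Each quadrature node contributes one solve with $t_k(A-I)+I$; increasing `n_nodes` trades more solves for higher quadrature accuracy.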
Related papers
- A Quantum Speed-Up for Approximating the Top Eigenvectors of a Matrix [2.7050250604223693]
Finding a good approximation of the top eigenvector of a given $d\times d$ matrix $A$ is a basic and important computational problem.
We give two different quantum algorithms that output a classical description of a good approximation of the top eigenvector.
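For reference, the standard classical baseline for this problem is power iteration. The sketch below is only that classical reference point, not the paper's quantum algorithms, and `num_iters` is an arbitrary illustrative value.

```python
import numpy as np

def top_eigenvector(A, num_iters=200, seed=0):
    """Classical power iteration: repeatedly apply A and renormalize.
    Converges to the top eigenvector when the top eigenvalue is
    well separated from the rest of the spectrum."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v
```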
arXiv Detail & Related papers (2024-05-23T16:33:13Z)
- Quantum algorithms for calculating determinant and inverse of matrix and solving linear algebraic systems [43.53835128052666]
We propose quantum algorithms for calculating the determinant and inverse of an $(N-1)\times(N-1)$ matrix.
The basic idea is to encode each row of the matrix into a pure state of some quantum system.
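The "encode each row into a pure state" step amounts to amplitude encoding: row $a_i$ is stored as the unit vector $a_i/\|a_i\|$ together with its norm. A minimal sketch of that classical preprocessing (the circuits that actually prepare these states are not shown):

```python
import numpy as np

def row_states(A):
    """Amplitude-encode each row of A: row i becomes the unit vector
    a_i / ||a_i||, returned together with its norm so the original
    row can be recovered as ||a_i|| times the encoded state."""
    norms = np.linalg.norm(A, axis=1)
    states = A / norms[:, None]
    return states, norms
```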
arXiv Detail & Related papers (2024-01-29T23:23:27Z)
- Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
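As context for what "sketching-based" means here, the simplest variant is sketch-and-solve with a Gaussian sketch. The sketch below is only that generic baseline, not the paper's iterative algorithm (which refines such a solution to $(1+\varepsilon)$ accuracy), and `sketch_size` is an illustrative parameter.

```python
import numpy as np

def sketched_ridge(X, y, lam, sketch_size, seed=0):
    """Sketch-and-solve ridge regression: compress the n x d design matrix
    with a Gaussian sketch S (sketch_size x n), then solve the ridge normal
    equations of the much smaller sketched problem."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SX, Sy = S @ X, S @ y
    return np.linalg.solve(SX.T @ SX + lam * np.eye(d), SX.T @ Sy)
```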
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Quantum algorithm for matrix functions by Cauchy's integral formula [1.399948157377307]
We consider the problem of obtaining the quantum state $\lvert f\rangle$ corresponding to the vector $f(A)\boldsymbol{b}$.
We propose a quantum algorithm that uses Cauchy's integral formula and the trapezoidal rule as an approach that avoids eigenvalue estimation.
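Cauchy's integral formula gives $f(A) = \frac{1}{2\pi i}\oint_\Gamma f(z)(zI-A)^{-1}\,dz$ for $f$ analytic on and inside a contour $\Gamma$ enclosing the spectrum of $A$. Below is a minimal classical sketch of the trapezoidal-rule discretization on a circular contour; the circle (center, radius) and node count are illustrative choices, and the quoted quantum algorithm handles the resolvents $(z_k I - A)^{-1}$ with quantum subroutines rather than explicit solves.

```python
import numpy as np

def f_A_times_b(f, A, b, center, radius, n_points=64):
    r"""Approximate f(A) @ b via Cauchy's integral formula
        f(A) = (1 / 2\pi i) \oint f(z) (zI - A)^{-1} dz
    on a circular contour (center, radius) enclosing the spectrum of A,
    discretized with the n_points-point trapezoidal rule."""
    I = np.eye(A.shape[0])
    out = np.zeros(A.shape[0], dtype=complex)
    for k in range(n_points):
        theta = 2.0 * np.pi * k / n_points
        z = center + radius * np.exp(1j * theta)
        # Each node contributes one shifted linear solve, weighted by f(z).
        out += radius * np.exp(1j * theta) * f(z) * np.linalg.solve(z * I - A, b)
    return (out / n_points).real  # real for real A, b and conjugate-symmetric f

# Example: exp(A) b with a contour enclosing the eigenvalues 1 and 2.
A = np.array([[1.0, 0.2], [0.0, 2.0]])
b = np.array([1.0, 1.0])
print(f_A_times_b(np.exp, A, b, center=1.5, radius=2.0))
```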
arXiv Detail & Related papers (2021-06-15T12:10:16Z)
- Efficient algorithm for generating Pauli coordinates for an arbitrary linear operator [0.0]
We present an efficient algorithm that, for our particular basis, involves only $\mathcal{O}(\mathrm{N}^2 \log \mathrm{N})$ operations.
Because this algorithm requires fewer than $\mathcal{O}(\mathrm{N}^3)$ operations, for large $\mathrm{N}$ it could be used as a preprocessing step for quantum computing algorithms.
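For orientation, the Pauli coordinates of a $2^n \times 2^n$ operator $M$ are the coefficients $c_P = \mathrm{tr}(PM)/2^n$ in the expansion $M = \sum_P c_P P$ over $n$-fold Pauli tensor products. The sketch below is the naive decomposition, far slower than the paper's $\mathcal{O}(\mathrm{N}^2 \log \mathrm{N})$ algorithm, and is included only to make the object being computed concrete.

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_coordinates(M):
    """Naive Pauli decomposition of a 2^n x 2^n operator M:
    the coefficient of Pauli string P is tr(P @ M) / 2^n,
    so that M = sum_P c_P * P."""
    n = int(np.log2(M.shape[0]))
    coords = {}
    for labels in product("IXYZ", repeat=n):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])
        coords["".join(labels)] = np.trace(P @ M) / M.shape[0]
    return coords
```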
arXiv Detail & Related papers (2020-11-17T20:57:39Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
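A spectral sum of a PSD matrix $A$ is $\sum_i f(\lambda_i) = \mathrm{tr}(f(A))$; for example, $f = \log$ gives $\log\det(A)$. The tiny exact classical reference below is for orientation only; the paper's quantum estimators are not reproduced here.

```python
import numpy as np

def spectral_sum(A, f):
    """Exact classical reference: the spectral sum of a symmetric PSD
    matrix A is sum_i f(lambda_i) = tr(f(A)); e.g. f = np.log gives
    the log-determinant of A."""
    return float(np.sum(f(np.linalg.eigvalsh(A))))
```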
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- Thresholded Lasso Bandit [70.17389393497125]
Thresholded Lasso bandit is an algorithm that estimates the vector defining the reward function as well as its sparse support.
We establish non-asymptotic regret upper bounds scaling as $\mathcal{O}(\log d + \sqrt{T})$ in general, and as $\mathcal{O}(\log d + \log T)$ under the so-called margin condition.
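A hedged sketch of the estimation step suggested by the summary: fit a Lasso estimate of the reward parameter from observed context/reward pairs, then threshold it to recover the sparse support. The `alpha` and `threshold` arguments are placeholders (the paper prescribes specific schedules), and the surrounding bandit loop is omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso_support(X, y, alpha, threshold):
    """Thresholded Lasso step: estimate the reward parameter by Lasso on
    the observed contexts X and rewards y, then keep only the coordinates
    whose magnitude exceeds the threshold as the estimated sparse support."""
    theta_hat = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    support = np.flatnonzero(np.abs(theta_hat) > threshold)
    return theta_hat, support
```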
arXiv Detail & Related papers (2020-10-22T19:14:37Z)
- Hutch++: Optimal Stochastic Trace Estimation [75.45968495410048]
We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \epsilon)$ approximation to $\mathrm{tr}(A)$ for any positive semidefinite (PSD) matrix $A$.
We show that it significantly outperforms Hutchinson's method in experiments.
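A minimal NumPy sketch of the Hutch++ idea as summarized above: spend part of the matrix-vector budget on a low-rank sketch whose trace contribution is computed exactly, and apply Hutchinson's estimator to the projected remainder. The budget split and query count below are illustrative.

```python
import numpy as np

def hutch_plus_plus(A, num_queries=120, seed=0):
    """Hutch++ trace estimator for a PSD matrix A.
    A third of the matrix-vector budget builds an orthonormal sketch Q whose
    contribution tr(Q^T A Q) is computed exactly; Hutchinson's estimator on
    the projected remainder (I - QQ^T) A (I - QQ^T) uses another third."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k = num_queries // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))
    G = rng.choice([-1.0, 1.0], size=(n, k))
    Q, _ = np.linalg.qr(A @ S)
    G_perp = G - Q @ (Q.T @ G)  # project the probes away from range(Q)
    return np.trace(Q.T @ A @ Q) + np.trace(G_perp.T @ A @ G_perp) / k

# Example: compare against the exact trace of a random PSD matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((300, 300))
A = B @ B.T
print(hutch_plus_plus(A), np.trace(A))
```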
arXiv Detail & Related papers (2020-10-19T16:45:37Z)