Matrix inversion polynomials for the quantum singular value transformation
- URL: http://arxiv.org/abs/2507.15537v1
- Date: Mon, 21 Jul 2025 12:04:56 GMT
- Title: Matrix inversion polynomials for the quantum singular value transformation
- Authors: Christoph Sünderhauf, Zalán Németh, Adnaan Walayat, Andrew Patterson, Bjorn K. Berntson
- Abstract summary: Quantum matrix inversion with the quantum singular value transformation (QSVT) requires a polynomial approximation to $1/x$. Several methods from the literature achieve the known degree complexity $\mathcal{O}(\kappa\log(\kappa/\varepsilon))$ with condition number $\kappa$ and uniform error $\varepsilon$. Here, we derive an analytic shortcut to the optimal polynomial; comparisons with approaches based on Taylor expansion, Chebyshev iteration, and convex optimization confirm that the result is optimal.
- Score: 3.718165623644259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum matrix inversion with the quantum singular value transformation (QSVT) requires a polynomial approximation to $1/x$. Several methods from the literature construct polynomials that achieve the known degree complexity $\mathcal{O}(\kappa\log(\kappa/\varepsilon))$ with condition number $\kappa$ and uniform error $\varepsilon$. However, the \emph{optimal} polynomial with lowest degree for fixed error $\varepsilon$ can only be approximated numerically with the resource-intensive Remez method, leading to impractical preprocessing runtimes. Here, we derive an analytic shortcut to the optimal polynomial. Comparisons with other polynomials from the literature, based on Taylor expansion, Chebyshev iteration, and convex optimization, confirm that our result is optimal. Furthermore, for large $\kappa\log(\kappa/\varepsilon)$, our polynomial has the smallest maximum value on $[-1,1]$ of all approaches considered, leading to reduced circuit depth due to the normalization condition of QSVT. With the Python code provided, this paper will also be useful for practitioners in the field.
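The paper's optimal polynomial is its own contribution and is not reproduced here. As a hedged baseline sketch, the snippet below fits a plain Chebyshev interpolant to $1/x$ on $[1/\kappa, 1]$ (a simpler, one-sided setting than QSVT, which needs an odd polynomial bounded on all of $[-1,1]$) and checks that a degree on the order of the known complexity $\kappa\log(\kappa/\varepsilon)$ comfortably meets the target uniform error. All parameter values are illustrative, not from the paper.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Illustrative sketch only (not the paper's optimal construction):
# interpolate 1/x on [1/kappa, 1] at Chebyshev nodes and measure the
# uniform error on a fine grid.
kappa = 20.0
target_eps = 1e-6
a, b = 1.0 / kappa, 1.0

# Degree suggested by the known complexity, up to a constant.
degree = int(kappa * np.log(kappa / target_eps))

# Chebyshev interpolation nodes mapped from [-1, 1] to [a, b].
k = np.arange(degree + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))
x = 0.5 * (b + a) + 0.5 * (b - a) * nodes

p = Chebyshev.fit(x, 1.0 / x, degree, domain=[a, b])

xs = np.linspace(a, b, 20000)
err = float(np.max(np.abs(p(xs) - 1.0 / xs)))
print(degree, err)  # err should be well below target_eps
```

Since $1/x$ is analytic away from the origin, the interpolant converges geometrically on $[1/\kappa, 1]$, so this degree is a comfortable overestimate for the one-sided problem.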
Related papers
- Two exact quantum signal processing results [0.0]
Quantum signal processing (QSP) is a framework for implementing certain functions via quantum circuits. To construct a QSP circuit, one needs a target polynomial $P(z)$, which must satisfy $\lvert P(z)\rvert \leq 1$ on the complex unit circle $\mathbb{T}$.
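As a small illustration of this admissibility condition, one can sample the unit circle and check $\lvert P(z)\rvert \leq 1$ numerically; the polynomial below is an arbitrary example, not one from the paper.

```python
import numpy as np

# Check |P(z)| <= 1 on the unit circle for an example polynomial
# P(z) = 0.5 + 0.4 z^2 (arbitrary illustrative coefficients).
coeffs = [0.5, 0.0, 0.4]

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)
vals = np.polynomial.polynomial.polyval(z, coeffs)
max_abs = float(np.max(np.abs(vals)))
print(max_abs)  # peaks at z = +/-1, where P(z) = 0.9
```

For this example the maximum modulus is 0.9 (attained at $z = \pm 1$), so the condition holds; a denser sampling grid tightens the check for higher-degree candidates.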
arXiv Detail & Related papers (2025-05-15T21:13:23Z) - Learning and Computation of $\Phi$-Equilibria at the Frontier of Tractability [85.07238533644636]
$\Phi$-equilibria is a powerful and flexible framework at the heart of online learning and game theory. We show that an efficient online algorithm incurs average $\Phi$-regret at most $\epsilon$ using $\text{poly}(d, k)/\epsilon^2$ rounds. We also show nearly matching lower bounds in the online setting, thereby obtaining for the first time a family of deviations that captures the learnability of $\Phi$-regret.
arXiv Detail & Related papers (2025-02-25T19:08:26Z) - Parallel Quantum Signal Processing Via Polynomial Factorization [3.1981483719988235]
We develop a Quantum Parallel Signal Processing algorithm.
Our algorithm parallelizes the computation of $\text{tr}(P(\rho))$ over $k$ systems and reduces the query depth to $d/k$, thus enabling a family of time-space tradeoffs for QSP.
This furnishes a property-estimation algorithm suitable for quantum computers, realized at the expense of increasing the number of measurements by a factor of $O(\text{poly}(d)\, 2^{k})$.
arXiv Detail & Related papers (2024-09-27T17:54:30Z) - A shortcut to an optimal quantum linear system solver [55.2480439325792]
We give a conceptually simple quantum linear system solver (QLSS) that does not use complex or difficult-to-analyze techniques.
If the solution norm $\lVert\boldsymbol{x}\rVert$ is known exactly, our QLSS requires only a single application of a kernel reflection.
Alternatively, by reintroducing a concept from the adiabatic path-following technique, we show that $O(\kappa)$ complexity can be achieved for norm estimation.
arXiv Detail & Related papers (2024-06-17T20:54:11Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle)$. We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with a complexity that is not governed by information exponents.
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Efficient Solution of Point-Line Absolute Pose [52.775981113238046]
We revisit certain problems of pose estimation based on 3D--2D correspondences between features, which may be points or lines.
We show experimentally that the resulting solvers are numerically stable and fast.
arXiv Detail & Related papers (2024-04-25T12:09:16Z) - An Improved Classical Singular Value Transformation for Quantum Machine Learning [2.3326951882644553]
We study quantum speedups in quantum machine learning (QML) by analyzing the quantum singular value transformation (QSVT) framework.
Our key insight is to combine the Clenshaw recurrence, an iterative method for evaluating matrix polynomials, with sketching techniques to simulate QSVT classically.
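The Clenshaw recurrence mentioned here can be sketched for a matrix argument as follows; this is a generic textbook version for evaluating a Chebyshev series $\sum_k c_k T_k(A)$, not the cited paper's implementation (which additionally uses sketching).

```python
import numpy as np

def clenshaw_matrix(coeffs, A):
    """Evaluate sum_k c_k T_k(A) for a symmetric matrix A with spectrum
    in [-1, 1] via the Clenshaw recurrence (illustrative sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    b1 = np.zeros((n, n))  # b_{k+1}
    b2 = np.zeros((n, n))  # b_{k+2}
    # Backward recurrence: b_k = c_k I + 2 A b_{k+1} - b_{k+2}
    for c in reversed(coeffs[1:]):
        b1, b2 = c * I + 2.0 * (A @ b1) - b2, b1
    # Final step uses c_0 and a single (not doubled) multiplication by A.
    return coeffs[0] * I + A @ b1 - b2

# Sanity check against direct scalar evaluation on a diagonal matrix.
A = np.diag([0.1, -0.5, 0.9])
coeffs = [0.2, 0.5, -0.3, 0.1]
direct = np.diag(np.polynomial.chebyshev.chebval(np.diag(A), coeffs))
approx = clenshaw_matrix(coeffs, A)
ok = bool(np.allclose(approx, direct))
print(ok)
```

The recurrence needs only one matrix product per coefficient and never forms the individual $T_k(A)$, which is what makes it attractive as a building block for classical QSVT simulation.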
arXiv Detail & Related papers (2023-03-02T18:53:03Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - On the energy landscape of symmetric quantum signal processing [1.5301252700705212]
We prove that one particular global minimum (the maximal solution) belongs to a neighborhood on which the cost function is strongly convex.
We provide a partial explanation of the effectiveness of optimization algorithms for this problem.
arXiv Detail & Related papers (2021-10-11T04:16:48Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
arXiv Detail & Related papers (2020-02-22T01:03:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.