Dequantization and Hardness of Spectral Sum Estimation
- URL: http://arxiv.org/abs/2509.20183v1
- Date: Wed, 24 Sep 2025 14:44:53 GMT
- Title: Dequantization and Hardness of Spectral Sum Estimation
- Authors: Roman Edenhofer, Atsuya Hasegawa, François Le Gall
- Abstract summary: We give new dequantization and hardness results for estimating spectral sums of matrices, such as the log-determinant. We complement our classical upper bound with $\mathsf{DQC1}$-completeness results for estimating specific spectral sums.
- Score: 1.0323063834827415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give new dequantization and hardness results for estimating spectral sums of matrices, such as the log-determinant. Recent quantum algorithms have demonstrated that the logarithm of the determinant of sparse, well-conditioned, positive matrices can be approximated to $\varepsilon$-relative accuracy in time polylogarithmic in the dimension $N$, specifically in time $\mathrm{poly}(\mathrm{log}(N), s, \kappa, 1/\varepsilon)$, where $s$ is the sparsity and $\kappa$ the condition number of the input matrix. We provide a simple dequantization of these techniques that preserves the polylogarithmic dependence on the dimension. Our classical algorithm runs in time $\mathrm{polylog}(N)\cdot s^{O(\sqrt{\kappa}\log \kappa/\varepsilon)}$ which constitutes an exponential improvement over previous classical algorithms in certain parameter regimes. We complement our classical upper bound with $\mathsf{DQC1}$-completeness results for estimating specific spectral sums such as the trace of the inverse and the trace of matrix powers for log-local Hamiltonians, with parameter scalings analogous to those of known quantum algorithms. Assuming $\mathsf{BPP}\subsetneq\mathsf{DQC1}$, this rules out classical algorithms with the same scalings. It also resolves a main open problem of Cade and Montanaro (TQC 2018) concerning the complexity of Schatten-$p$ norm estimation. We further analyze a block-encoding input model, where instead of a classical description of a sparse matrix, we are given a block-encoding of it. We show $\mathsf{DQC1}$-completeness in a very general way in this model for estimating $\mathrm{tr}[f(A)]$ whenever $f$ and $f^{-1}$ are sufficiently smooth. We conclude our work with $\mathsf{BQP}$-hardness and $\mathsf{PP}$-completeness results for high-accuracy log-determinant estimation.
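For intuition, the classical baseline that such dequantizations improve upon can be sketched as a Hutchinson stochastic trace estimator combined with a Chebyshev polynomial approximation of the logarithm. This is a standard illustrative technique, not the paper's algorithm (whose cost scales as $\mathrm{polylog}(N)\cdot s^{O(\sqrt{\kappa}\log\kappa/\varepsilon)}$); the function name `chebyshev_logdet` and all parameter choices below are assumptions made for the sketch.

```python
import numpy as np

def chebyshev_logdet(A, lam_min, lam_max, deg=40, probes=200, seed=0):
    """Estimate log det(A) = tr[log(A)] for a symmetric positive definite A
    with spectrum in [lam_min, lam_max], using a Chebyshev approximation of
    log combined with a Hutchinson stochastic trace estimator."""
    n = A.shape[0]
    a, b = lam_min, lam_max
    # Chebyshev coefficients of t -> log((b-a)/2 * t + (a+b)/2) on [-1, 1].
    c = np.polynomial.chebyshev.chebinterpolate(
        lambda t: np.log((b - a) / 2 * t + (a + b) / 2), deg)
    # Affine map sending the spectrum of A into [-1, 1].
    B = (2 * A - (a + b) * np.eye(n)) / (b - a)
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        t_prev, t_curr = v, B @ v             # T_0(B) v and T_1(B) v
        acc = c[0] * t_prev + c[1] * t_curr
        for k in range(2, deg + 1):
            # Chebyshev three-term recurrence: T_k = 2 B T_{k-1} - T_{k-2}.
            t_prev, t_curr = t_curr, 2 * (B @ t_curr) - t_prev
            acc += c[k] * t_curr
        total += v @ acc                      # approximates v^T log(A) v
    return total / probes
```

Each probe costs `deg` sparse matrix-vector products, so the runtime depends on the dimension only through those products; the polynomial degree needed grows with $\sqrt{\kappa}$, mirroring the exponent in the paper's bound.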
Related papers
- Spectral Gaps with Quantum Counting Queries and Oblivious State Preparation [47.600794349481966]
In this work, we present a quantum algorithm which approximates values up to additive error $\epsilon\Delta_k$ using a logarithmic number of qubits. A key technical step in the analysis is the preparation of a suitable random initial state, which ultimately allows us to efficiently count the number of eigenvalues that are smaller than a threshold.
arXiv Detail & Related papers (2025-08-28T17:04:18Z) - A quantum algorithm for estimating the determinant [4.369550829556577]
The algorithm estimates the determinant of an $n \times n$ positive sparse matrix to an accuracy $\epsilon$ in time $\mathcal{O}(\log n/\epsilon^3)$. The quantum spectral sampling algorithm generalizes to estimating any quantity $\sum_j f(\lambda_j)$, where $\lambda_j$ are the matrix eigenvalues.
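The identity underlying this generalization, $\sum_j f(\lambda_j) = \mathrm{tr}[f(A)]$, can be sanity-checked classically with $f = \log$, where the spectral sum is exactly the log-determinant; the matrix below is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite test matrix
lam = np.linalg.eigvalsh(A)

# The spectral sum sum_j f(lambda_j) equals tr[f(A)]; with f = log it is
# the log-determinant, which numpy computes independently via slogdet.
spectral_sum = np.sum(np.log(lam))
sign, logabsdet = np.linalg.slogdet(A)
assert sign > 0 and np.isclose(spectral_sum, logabsdet)
```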
arXiv Detail & Related papers (2025-04-15T10:32:36Z) - Quantum algorithm for the gradient of a logarithm-determinant [0.0]
The inverse of a sparse-rank input operator may be determined efficiently. The algorithm is envisioned for fully error-corrected quantum computers. We discuss how this algorithm can be used for kernel-based quantum machine learning.
arXiv Detail & Related papers (2025-01-16T09:39:31Z) - Fast and Practical Quantum-Inspired Classical Algorithms for Solving
Linear Systems [11.929584800629673]
We propose fast and practical quantum-inspired classical algorithms for solving linear systems.
Our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems.
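The heavy-ball momentum method referenced here can be sketched in its textbook form for a symmetric positive definite system $Ax=b$ (Polyak's iteration, not the quantum-inspired algorithm itself); the step sizes below are the classical optimal choices for a spectrum contained in $[\mu, L]$, and the test matrix is an arbitrary assumption.

```python
import numpy as np

def heavy_ball_solve(A, b, alpha, beta, iters=2000):
    """Polyak heavy-ball iteration for an SPD system A x = b:
    x_{k+1} = x_k + alpha * (b - A x_k) + beta * (x_k - x_{k-1})."""
    x_prev = np.zeros_like(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        x, x_prev = x + alpha * (b - A @ x) + beta * (x - x_prev), x
    return x

# Illustrative SPD system; alpha, beta use the classical optimal choices
# for eigenvalues in [mu, L], which give a sqrt(condition-number) rate.
rng = np.random.default_rng(0)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)
mu, L = np.linalg.eigvalsh(A)[[0, -1]]
alpha = 4 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
x = heavy_ball_solve(A, b, alpha, beta)
```

The momentum term $\beta(x_k - x_{k-1})$ is what improves the iteration count from $O(\kappa)$ (plain Richardson/gradient iteration) to $O(\sqrt{\kappa})$ on quadratics.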
arXiv Detail & Related papers (2023-07-13T08:46:19Z) - Quantum Algorithms for Sampling Log-Concave Distributions and Estimating
Normalizing Constants [8.453228628258778]
We develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants.
We exploit quantum analogs of the Monte Carlo method and quantum walks.
We also prove a $1/\epsilon^{1-o(1)}$ quantum lower bound for estimating normalizing constants.
arXiv Detail & Related papers (2022-10-12T19:10:43Z) - Near-optimal fitting of ellipsoids to random points [68.12685213894112]
A basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
We resolve this conjecture up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(d^2/\mathrm{polylog}(d))$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
arXiv Detail & Related papers (2022-08-19T18:00:34Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N \times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - Low-degree learning and the metric entropy of polynomials [44.99833362998488]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L^2$-accuracy $\varepsilon$ requires at least $\Omega\big((1-\sqrt{\varepsilon})2^d\log n\big)$ queries, and that the metric entropy $\log \mathsf{M}(\mathscr{F}_{n,d},\|\cdot\|_{L^2},\varepsilon)$ satisfies a two-sided estimate with lower bound $c(1-\varepsilon)2^d\log n$.
arXiv Detail & Related papers (2022-03-17T23:52:08Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
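One classical touchstone for such spectral-graph applications: by Kirchhoff's matrix-tree theorem, the number of spanning trees equals the determinant of a reduced Laplacian, i.e. $\exp(\mathrm{tr}[\log(\cdot)])$, a spectral sum. The small example below (the complete graph $K_4$) is an illustration of that connection, not the paper's algorithm.

```python
import numpy as np

# Kirchhoff's matrix-tree theorem: the number of spanning trees of a graph
# equals det of the Laplacian with one row and column deleted, a spectral
# quantity computable as exp(tr[log(.)]) via slogdet.
n = 4
Adj = np.ones((n, n)) - np.eye(n)        # adjacency matrix of K_4
Lap = np.diag(Adj.sum(axis=1)) - Adj     # graph Laplacian
reduced = Lap[1:, 1:]                    # delete row and column 0
sign, logdet = np.linalg.slogdet(reduced)
num_trees = round(np.exp(logdet))
# Cayley's formula: K_n has n^(n-2) spanning trees, so K_4 has 4^2 = 16.
assert sign > 0 and num_trees == 16
```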
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Thresholded Lasso Bandit [70.17389393497125]
Thresholded Lasso bandit is an algorithm that estimates the vector defining the reward function as well as its sparse support.
We establish non-asymptotic regret upper bounds scaling as $\mathcal{O}(\log d + \sqrt{T})$ in general, and as $\mathcal{O}(\log d + \log T)$ under the so-called margin condition.
arXiv Detail & Related papers (2020-10-22T19:14:37Z) - Hybrid Stochastic-Deterministic Minibatch Proximal Gradient:
Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain a generalization bound of $\mathcal{O}\big(1/\sqrt{tn}\big)$, which is at the order of the intrinsic excess error of a learning model, for both quadratic and generic loss functions.
arXiv Detail & Related papers (2020-09-18T02:18:44Z) - Enhancing the Quantum Linear Systems Algorithm using Richardson
Extrapolation [0.8057006406834467]
We present a quantum algorithm to solve systems of linear equations of the form $A\mathbf{x}=\mathbf{b}$.
The algorithm achieves an exponential improvement with respect to $N$ over classical methods.
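Richardson extrapolation itself is a generic error-cancellation technique: combining an $h$-dependent approximation at two step sizes removes the leading $O(h^p)$ error term. A minimal classical illustration on a finite-difference derivative (unrelated to the quantum circuit details of the paper):

```python
import numpy as np

def richardson(F, h, p):
    """Cancel the O(h^p) leading error of an approximation F(h) by
    combining F(h) and F(h/2); the result has a higher-order error."""
    return (2 ** p * F(h / 2) - F(h)) / (2 ** p - 1)

# Example: forward-difference derivative of sin at x = 0.5.
# The forward difference has leading error O(h), so p = 1.
f, x = np.sin, 0.5
F = lambda h: (f(x + h) - f(x)) / h
h = 1e-2
plain = F(h)                    # error ~ (h/2) * f''(x)
extrap = richardson(F, h, p=1)  # leading O(h) term cancelled
exact = np.cos(x)
```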
arXiv Detail & Related papers (2020-09-09T18:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed content) and is not responsible for any consequences of its use.