Quantum Advantage via Efficient Post-processing on Qudit Shadow tomography
- URL: http://arxiv.org/abs/2408.16244v5
- Date: Tue, 08 Apr 2025 20:25:33 GMT
- Title: Quantum Advantage via Efficient Post-processing on Qudit Shadow tomography
- Authors: Yu Wang
- Abstract summary: Inner products of the form \( \operatorname{tr}(AB) \) are fundamental across quantum science and artificial intelligence. We introduce a quantum approach based on qudit classical shadow tomography, significantly reducing computational complexity. Our results establish a practical and modular quantum subroutine, enabling scalable quantum advantages in tasks involving high-dimensional data analysis and processing.
- Score: 3.19428095493284
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computing inner products of the form \( \operatorname{tr}(AB) \), where \( A \) is a \( d \)-dimensional density matrix (with \( \operatorname{tr}(A) = 1 \), \( A \geq 0 \)) and \( B \) is a bounded-norm observable (Hermitian with \( \operatorname{tr}(B^2) \le O(\mathrm{poly}(\log d)) \) and \( \operatorname{tr}(B) \) known), is fundamental across quantum science and artificial intelligence. Classically, both computing and storing such inner products require $O(d^2)$ resources, which rapidly becomes prohibitive as $d$ grows exponentially. In this work, we introduce a quantum approach based on qudit classical shadow tomography, significantly reducing the computational complexity from $O(d^2)$ to $O(\mathrm{poly}(\log d))$ in typical cases and to $O(d \cdot \mathrm{poly}(\log d))$ in the worst case. Specifically, for \( n \)-qubit systems (where \( d = 2^n \)), our method guarantees efficient estimation of \( \operatorname{tr}(\rho O) \) for any known stabilizer state \( \rho \) and arbitrary bounded-norm observable \( O \), using polynomial computational resources. Crucially, it ensures constant-time classical post-processing per measurement and supports both qubit and qudit platforms. Moreover, the classical storage complexity of \( A \) is reduced from $O(d^2)$ to $O(m \log d)$, where the sample complexity $m$ is typically exponentially smaller than $d^2$. Our results establish a practical and modular quantum subroutine, enabling scalable quantum advantages in tasks involving high-dimensional data analysis and processing.
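For intuition only, below is a minimal sketch of the standard qubit-based, random-Pauli classical-shadow estimator of \( \operatorname{tr}(\rho O) \) (Huang-Kueng-Preskill style). It is a generic simulation under assumed choices, not the qudit protocol or the constant-time post-processing of this paper; the function name `shadow_estimate`, the demo state, observable, and sample count are all hypothetical illustration choices.

```python
import numpy as np

# Illustrative sketch: estimate tr(rho O) from random-Pauli classical shadows
# on qubits. NOT the qudit protocol of the paper; parameters are demo choices.

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)  # S^dagger
# Unitaries rotating the X/Y/Z measurement bases onto the computational basis.
BASIS_ROT = {"X": H, "Y": H @ Sdg, "Z": I2}


def shadow_estimate(rho, obs, n_qubits, n_samples=2000):
    """Estimate tr(rho @ obs) from n_samples random-Pauli shadow snapshots."""
    dim = 2 ** n_qubits
    total = 0.0
    for _ in range(n_samples):
        bases = rng.choice(["X", "Y", "Z"], size=n_qubits)
        # Full rotation U = kron(U_1, ..., U_n); sample a computational-basis outcome.
        U = np.array([[1.0]], dtype=complex)
        for b in bases:
            U = np.kron(U, BASIS_ROT[b])
        probs = np.real(np.diag(U @ rho @ U.conj().T)).clip(min=0)
        probs /= probs.sum()
        outcome = rng.choice(dim, p=probs)
        # Single-snapshot reconstruction: kron over qubits of (3 U_j^dag |b_j><b_j| U_j - I).
        snapshot = np.array([[1.0]], dtype=complex)
        for j, b in enumerate(bases):
            bit = (outcome >> (n_qubits - 1 - j)) & 1
            ket = np.zeros((2, 1), dtype=complex)
            ket[bit, 0] = 1.0
            local = 3 * BASIS_ROT[b].conj().T @ (ket @ ket.conj().T) @ BASIS_ROT[b] - I2
            snapshot = np.kron(snapshot, local)
        total += np.real(np.trace(snapshot @ obs))
    return total / n_samples


# Demo: rho = |000><000| (a stabilizer state), obs = Z on the first qubit.
n = 3
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
rho = np.outer(psi, psi.conj())
obs = np.kron(Z, np.eye(2 ** (n - 1), dtype=complex))
print(shadow_estimate(rho, obs, n))  # expected to be close to 1.0
```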
Related papers
- Quantum oracles for the finite element method [45.200826131319815]
This study examines the quantum routines required for the implementation of oracles used in the block-encoding of the $N \times N$ stiffness and mass matrices.
We show how to construct the necessary oracles, which require the calculation of element geometry, square root and the implementation of conditional operations.
arXiv Detail & Related papers (2025-04-28T14:28:31Z)
- Estimating quantum amplitudes can be exponentially improved [11.282486674587236]
Estimating quantum amplitudes is a fundamental task in quantum computing.
We present a novel framework for estimating quantum amplitudes by transforming pure states into their matrix forms.
Our framework achieves the standard quantum limit $\epsilon^{-2}$ and the Heisenberg limit $\epsilon^{-1}$, respectively.
arXiv Detail & Related papers (2024-08-25T04:35:53Z)
- Rank Is All You Need: Estimating the Trace of Powers of Density Matrices [1.5133368155322298]
Estimating the trace of powers of $k$ identical density matrices, i.e., $\operatorname{tr}(\rho^k)$, is a crucial subroutine for many applications.
Inspired by the Newton-Girard method, we developed an algorithm that uses only $\mathcal{O}(r)$ qubits and $\mathcal{O}(r)$ multi-qubit gates.
arXiv Detail & Related papers (2024-08-01T06:23:52Z)
- Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of the relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known.
Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
arXiv Detail & Related papers (2024-06-25T06:07:20Z)
- Stochastic Quantum Sampling for Non-Logconcave Distributions and Estimating Partition Functions [13.16814860487575]
We present quantum algorithms for sampling from nonlogconcave probability distributions.
$f$ can be written as a finite sum $f(x) := \frac{1}{N}\sum_{k=1}^{N} f_k(x)$.
arXiv Detail & Related papers (2023-10-17T17:55:32Z)
- Spacetime-Efficient Low-Depth Quantum State Preparation with Applications [93.56766264306764]
We show that a novel deterministic method for preparing arbitrary quantum states requires fewer quantum resources than previous methods.
We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations.
arXiv Detail & Related papers (2023-03-03T18:23:20Z)
- Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We provide general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z)
- Solving the semidefinite relaxation of QUBOs in matrix multiplication time, and faster with a quantum computer [0.20999222360659603]
We show that some quantum SDO solvers provide speedups in the low-precision regime.
We exploit this fact to exponentially improve the dependence of their algorithm on precision.
A quantum implementation of our algorithm exhibits a worst-case running time of $\mathcal{O}\left(ns + n^{1.5} \cdot \mathrm{polylog}\left(n, \|C\|_F, \frac{1}{\epsilon}\right)\right)$.
arXiv Detail & Related papers (2023-01-10T23:12:05Z)
- Faster Stochastic First-Order Method for Maximum-Likelihood Quantum State Tomography [10.758021887982782]
In quantum state tomography, the sample size and dimension grow exponentially with the number of qubits.
We propose an algorithm called mirror descent with the Burg-likelihood entropy.
To the best of our knowledge, this is currently the fastest first-order method for maximum-likelihood quantum state tomography.
arXiv Detail & Related papers (2022-11-23T11:35:47Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Classical shadows of fermions with particle number symmetry [0.0]
We provide an estimator for any $k$-RDM with $\mathcal{O}(k^2\eta)$ classical complexity.
Our method, in the worst case of half-filling, still provides a factor of $4^k$ advantage in sample complexity.
arXiv Detail & Related papers (2022-08-18T17:11:12Z)
- Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N \times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z)
- Exponential Separation between Quantum and Classical Ordered Binary Decision Diagrams, Reordering Method and Hierarchies [68.93512627479197]
We study the quantum Ordered Binary Decision Diagram ($OBDD$) model.
We prove lower bounds and upper bounds for OBDDs with an arbitrary order of input variables.
We extend the width hierarchy for read-$k$-times Ordered Binary Decision Diagrams ($k$-OBDD).
arXiv Detail & Related papers (2022-04-22T12:37:56Z)
- Constrained mixers for the quantum approximate optimization algorithm [55.41644538483948]
We present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space.
We generalize the "XY"-mixer designed to preserve the subspace of "one-hot" states to the general case of subspaces given by a number of computational basis states.
Our analysis also leads to valid Trotterizations for the "XY"-mixer with fewer CX gates than is known to date.
arXiv Detail & Related papers (2022-03-11T17:19:26Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Divide-and-conquer verification method for noisy intermediate-scale quantum computation [0.0]
Noisy intermediate-scale quantum computations can be regarded as logarithmic-depth quantum circuits on a sparse quantum computing chip.
We propose a method to efficiently verify such noisy intermediate-scale quantum computation.
arXiv Detail & Related papers (2021-09-30T08:56:30Z)
- Improved quantum data analysis [1.8416014644193066]
We give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state.
We also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
arXiv Detail & Related papers (2020-11-22T01:22:37Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We prove that HSDMPG can attain an $\mathcal{O}\big(1/\sqrt{tn}\big)$ error bound, which is at the order of the excess error of a learning model.
arXiv Detail & Related papers (2020-09-18T02:18:44Z)