On estimating the trace of quantum state powers
- URL: http://arxiv.org/abs/2410.13559v1
- Date: Thu, 17 Oct 2024 13:57:13 GMT
- Title: On estimating the trace of quantum state powers
- Authors: Yupan Liu, Qisheng Wang
- Abstract summary: We investigate the computational complexity of estimating the trace of quantum state powers $\text{tr}(\rho^q)$ for an $n$-qubit mixed quantum state $\rho$.
Our speedup is achieved by introducing efficiently computable uniform approximations of positive power functions into quantum singular value transformation.
- Score: 2.637436382971936
- License:
- Abstract: We investigate the computational complexity of estimating the trace of quantum state powers $\text{tr}(\rho^q)$ for an $n$-qubit mixed quantum state $\rho$, given its state-preparation circuit of size $\text{poly}(n)$. This quantity is closely related to and often interchangeable with the Tsallis entropy $\text{S}_q(\rho) = \frac{1-\text{tr}(\rho^q)}{q-1}$, where $q = 1$ corresponds to the von Neumann entropy. For any non-integer $q \geq 1 + \Omega(1)$, we provide a quantum estimator for $\text{S}_q(\rho)$ with time complexity $\text{poly}(n)$, exponentially improving the prior best results of $\exp(n)$ due to Acharya, Issa, Shende, and Wagner (ISIT 2019), Wang, Guan, Liu, Zhang, and Ying (TIT 2024), Wang, Zhang, and Li (TIT 2024), and Wang and Zhang (ESA 2024). Our speedup is achieved by introducing efficiently computable uniform approximations of positive power functions into quantum singular value transformation. Our quantum algorithm reveals a sharp phase transition between the case of $q=1$ and constant $q>1$ in the computational complexity of the Quantum $q$-Tsallis Entropy Difference Problem (TsallisQED$_q$), particularly deciding whether the difference $\text{S}_q(\rho_0) - \text{S}_q(\rho_1)$ is at least $0.001$ or at most $-0.001$:
  - For any $1+\Omega(1) \leq q \leq 2$, TsallisQED$_q$ is $\mathsf{BQP}$-complete, which implies that Purity Estimation is also $\mathsf{BQP}$-complete.
  - For any $1 \leq q \leq 1 + \frac{1}{n-1}$, TsallisQED$_q$ is $\mathsf{QSZK}$-hard, leading to hardness of approximating the von Neumann entropy because $\text{S}_q(\rho) \leq \text{S}(\rho)$, as long as $\mathsf{BQP} \subsetneq \mathsf{QSZK}$.

  The hardness results are derived from reductions based on new inequalities for the quantum $q$-Jensen-(Shannon-)Tsallis divergence with $1\leq q \leq 2$, which are of independent interest.
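For intuition, here is a minimal classical sketch (assuming NumPy; it brute-forces the eigenvalues in time exponential in $n$, unlike the paper's $\text{poly}(n)$ quantum estimator) of the quantities being estimated, $\text{tr}(\rho^q)$ and $\text{S}_q(\rho)$:

```python
import numpy as np

def tsallis_entropy(rho: np.ndarray, q: float) -> float:
    """Tsallis entropy S_q(rho) = (1 - tr(rho^q)) / (q - 1) for q != 1,
    computed classically from the eigenvalues of rho."""
    eigvals = np.clip(np.linalg.eigvalsh(rho), 0.0, None)  # rho is Hermitian and PSD
    trace_power = float(np.sum(eigvals ** q))               # tr(rho^q) = sum_i lambda_i^q
    return (1.0 - trace_power) / (q - 1.0)

# Example: a single-qubit mixed state diag(0.75, 0.25).
rho = np.diag([0.75, 0.25])
print(tsallis_entropy(rho, q=1.5))  # a non-integer q in the regime 1 + Omega(1) <= q <= 2
print(tsallis_entropy(rho, q=2.0))  # equals 1 - tr(rho^2) = 1 - purity = 0.375
```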
Related papers
- Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of the relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known.
Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
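As a point of reference, a minimal classical sketch (assuming NumPy/SciPy and full matrix descriptions of both states, unlike the estimation setting above) of the quantity $D(\rho\|\sigma)$ itself:

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Quantum relative entropy D(rho || sigma) = tr[rho (log rho - log sigma)],
    in nats, assuming supp(rho) is contained in supp(sigma)."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(relative_entropy(rho, sigma))  # ~0.368 nats for this commuting pair
```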
arXiv Detail & Related papers (2024-06-25T06:07:20Z)
- A shortcut to an optimal quantum linear system solver [55.2480439325792]
We give a conceptually simple quantum linear system solver (QLSS) that does not use complex or difficult-to-analyze techniques.
If the solution norm $\lVert\boldsymbol{x}\rVert$ is known exactly, our QLSS requires only a single application of a kernel.
Alternatively, by reintroducing a concept from the adiabatic path-following technique, we show that $O(\kappa)$ complexity can be achieved for norm estimation.
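A classical sketch (assuming NumPy; function and variable names are illustrative) of what a QLSS is asked to produce: the normalized solution $|\boldsymbol{x}\rangle \propto A^{-1}\boldsymbol{b}$, its norm $\lVert\boldsymbol{x}\rVert$, and the condition number $\kappa$ that governs the $O(\kappa)$ complexity:

```python
import numpy as np

def qlss_target(A: np.ndarray, b: np.ndarray):
    """Classically compute what a QLSS prepares as a quantum state:
    the normalized solution of A x = b, together with its norm and
    the condition number kappa governing the solver's complexity."""
    x = np.linalg.solve(A, b)
    kappa = np.linalg.cond(A)  # kappa = sigma_max(A) / sigma_min(A)
    return x / np.linalg.norm(x), np.linalg.norm(x), kappa

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 0.0])
state, norm_x, kappa = qlss_target(A, b)
print(state, norm_x, kappa)
```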
arXiv Detail & Related papers (2024-06-17T20:54:11Z)
- Time-Efficient Quantum Entropy Estimator via Samplizer [7.319050391449301]
Estimating the entropy of a quantum state is a basic problem in quantum information.
We introduce a time-efficient quantum approach to estimating the von Neumann entropy $S(\rho)$ and Rényi entropy $S_\alpha(\rho)$ of an $N$-dimensional quantum state $\rho$.
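For reference, the textbook eigenvalue formulas evaluated classically (assuming NumPy; this takes time polynomial in the dimension $N$, i.e., exponential in the number of qubits, unlike the time-efficient quantum estimator of the paper):

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -tr(rho log rho), from the eigenvalues of rho (in nats)."""
    p = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def renyi_entropy(rho: np.ndarray, alpha: float) -> float:
    """S_alpha(rho) = log(tr(rho^alpha)) / (1 - alpha) for alpha != 1."""
    p = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

rho = np.diag([0.7, 0.2, 0.1])
print(von_neumann_entropy(rho), renyi_entropy(rho, alpha=2.0))
```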
arXiv Detail & Related papers (2024-01-18T12:50:20Z)
- Basic quantum subroutines: finding multiple marked elements and summing numbers [1.1265248232450553]
We show how to find all $k$ marked elements in a list of size $N$ using the optimal number $O(\sqrt{Nk})$ of quantum queries.
arXiv Detail & Related papers (2023-02-20T19:11:44Z)
- Enlarging the notion of additivity of resource quantifiers [62.997667081978825]
Given a quantum state $\varrho$ and a quantifier $\mathcal{E}(\varrho)$, it is a hard task to determine $\mathcal{E}(\varrho^{\otimes N})$.
We show that the one-shot distillable entanglement of certain spherically symmetric states can be quantitatively approximated by such an augmented additivity.
arXiv Detail & Related papers (2022-07-31T00:23:10Z)
- Quantum algorithms from fluctuation theorems: Thermal-state preparation [0.09786690381850353]
We present a quantum algorithm to prepare a purification of the thermal state of $H_1$ at inverse temperature $\beta$.
The dependence of the complexity on $\epsilon$ varies according to the structure of the quantum systems.
We analyze the complexity for preparing the thermal state of the transverse field Ising model using different non-equilibrium unitary processes.
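A classical exact-diagonalization sketch (assuming NumPy; `tfim_hamiltonian`, `gibbs_state`, and the couplings $J$, $g$ are illustrative choices, not taken from the paper) of the Gibbs state $\rho = e^{-\beta H}/\text{tr}(e^{-\beta H})$ that such a thermal-state preparation algorithm targets, here for a small transverse-field Ising chain:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    return reduce(np.kron, ops)

def tfim_hamiltonian(n: int, J: float = 1.0, g: float = 1.0) -> np.ndarray:
    """Open-chain transverse-field Ising model: H = -J sum_i Z_i Z_{i+1} - g sum_i X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        ops = [I2] * n
        ops[i], ops[i + 1] = Z, Z
        H -= J * kron_all(ops)
    for i in range(n):
        ops = [I2] * n
        ops[i] = X
        H -= g * kron_all(ops)
    return H

def gibbs_state(H: np.ndarray, beta: float) -> np.ndarray:
    """rho = exp(-beta H) / tr(exp(-beta H)), via exact diagonalization."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-beta * (evals - evals.min()))  # shift for numerical stability
    w /= w.sum()
    return (evecs * w) @ evecs.conj().T

rho = gibbs_state(tfim_hamiltonian(n=3), beta=1.0)
print(np.trace(rho).real)  # 1.0
```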
arXiv Detail & Related papers (2022-03-16T18:55:12Z)
- Optimal (controlled) quantum state preparation and improved unitary synthesis by quantum circuits with any number of ancillary qubits [20.270300647783003]
Controlled quantum state preparation (CQSP) aims to provide the transformation $|i\rangle|0^n\rangle \to |i\rangle|\psi_i\rangle$ for all $i \in \{0,1\}^k$ for the given $n$-qubit states.
We construct a quantum circuit for implementing CQSP with depth $O\left(n+k+\frac{2^{n+k}}{n+k+m}\right)$ and size $O\left(2^{n+k}\right)$, where $m$ is the number of ancillary qubits.
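A classical sketch (assuming NumPy/SciPy; helper names are illustrative) of the map a CQSP circuit implements, written as the block-diagonal unitary $\bigoplus_i U_i$ with $U_i|0^n\rangle = |\psi_i\rangle$; this only illustrates the target transformation, not the depth-optimal circuit construction of the paper:

```python
import numpy as np
from scipy.linalg import block_diag

def state_prep_unitary(psi: np.ndarray) -> np.ndarray:
    """A unitary whose first column is psi, so U|0...0> = |psi> (built via QR)."""
    d = len(psi)
    M = np.column_stack([psi, np.eye(d, dtype=complex)[:, 1:]])
    Q, _ = np.linalg.qr(M)
    # QR fixes the first column only up to a phase; rotate it back onto psi.
    phase = np.vdot(Q[:, 0], psi)
    Q[:, 0] *= phase / abs(phase)
    return Q

def cqsp_unitary(states):
    """Block-diagonal unitary implementing |i>|0^n> -> |i>|psi_i> for all i."""
    return block_diag(*[state_prep_unitary(psi) for psi in states])

# k = 1 control qubit, n = 1 target qubit: two single-qubit target states.
psi0 = np.array([1.0, 0.0], dtype=complex)
psi1 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
U = cqsp_unitary([psi0, psi1])
print(np.round(U @ np.kron([0, 1], [1, 0]), 3))  # |1>|0>  ->  |1>|psi_1>
```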
arXiv Detail & Related papers (2022-02-23T04:19:57Z)
- Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z)
- An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}(1+\log n)^{\ell-1}}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
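A small numerical sketch of the bound as stated (the absolute constant $c$ is not specified in the abstract, so the value used below is a placeholder):

```python
import math

def fourier_l1_bound(n: int, d: int, ell: int, c: float = 2.0) -> float:
    """Evaluate c^ell * sqrt(binom(d, ell) * (1 + log n)^(ell - 1)), the stated
    upper bound on the sum of absolute order-ell Fourier coefficients of a
    depth-d decision tree on n variables (c = 2.0 is only a placeholder)."""
    return c**ell * math.sqrt(math.comb(d, ell) * (1 + math.log(n)) ** (ell - 1))

print(fourier_l1_bound(n=1024, d=32, ell=3))
```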
arXiv Detail & Related papers (2020-08-24T06:50:57Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
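A generic tabular Q-learning sketch (assuming NumPy; this is a standard model-free method shown only to illustrate the setting, not the paper's algorithm, and `env_step` plus all hyperparameters are hypothetical) showing what "model-free" means: the agent improves its value estimates from sampled transitions without ever building the transition model:

```python
import numpy as np

def q_learning(env_step, S: int, A: int, gamma: float,
               episodes: int = 5000, horizon: int = 100,
               lr: float = 0.1, explore: float = 0.1, seed: int = 0) -> np.ndarray:
    """Tabular Q-learning for a discounted MDP with S states and A actions.
    env_step(s, a) -> (next_state, reward) samples one transition."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((S, A))
    for _ in range(episodes):
        s = int(rng.integers(S))
        for _ in range(horizon):
            a = int(rng.integers(A)) if rng.random() < explore else int(np.argmax(Q[s]))
            s_next, r = env_step(s, a)
            # Temporal-difference update toward r + gamma * max_a' Q(s', a').
            Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q  # greedy policy: pi(s) = argmax_a Q[s, a]
```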
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.