Pauli error estimation via Population Recovery
- URL: http://arxiv.org/abs/2105.02885v2
- Date: Fri, 10 Sep 2021 04:01:27 GMT
- Title: Pauli error estimation via Population Recovery
- Authors: Steven T. Flammia and Ryan O'Donnell
- Abstract summary: We study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel.
We give an extremely simple algorithm that learns the Pauli error rates of an $n$-qubit channel to precision $\epsilon$ in $\ell_\infty$ using just $O(1/\epsilon^2) \log(n/\epsilon)$ applications of the channel.
We extend our algorithm to achieve multiplicative precision $1 \pm \epsilon$ (i.e., additive precision $\epsilon \eta$) using just $O\bigl(\frac{1}{\epsilon^2 \eta}\bigr) \log(n/\epsilon)$ applications of the channel.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Motivated by estimation of quantum noise models, we study the problem of
learning a Pauli channel, or more generally the Pauli error rates of an
arbitrary channel. By employing a novel reduction to the "Population Recovery"
problem, we give an extremely simple algorithm that learns the Pauli error
rates of an $n$-qubit channel to precision $\epsilon$ in $\ell_\infty$ using
just $O(1/\epsilon^2) \log(n/\epsilon)$ applications of the channel. This is
optimal up to the logarithmic factors. Our algorithm uses only unentangled
state preparation and measurements, and the post-measurement classical runtime
is just an $O(1/\epsilon)$ factor larger than the measurement data size. It is
also impervious to a limited model of measurement noise where heralded
measurement failures occur independently with probability $\le 1/4$.
We then consider the case where the noise channel is close to the identity,
meaning that the no-error outcome occurs with probability $1-\eta$. In the
regime of small $\eta$ we extend our algorithm to achieve multiplicative
precision $1 \pm \epsilon$ (i.e., additive precision $\epsilon \eta$) using
just $O\bigl(\frac{1}{\epsilon^2 \eta}\bigr) \log(n/\epsilon)$ applications of
the channel.
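The single-qubit case illustrates the unentangled prepare-and-measure strategy behind such estimators: prepare a +1 eigenstate of a Pauli $a$, apply the channel, and measure $a$ to estimate the Pauli fidelity $\lambda_a$; then invert a Walsh-Hadamard-type transform to recover the error rates. The following is a minimal classical simulation with illustrative error rates and shot count, not the paper's full $n$-qubit population-recovery algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative true single-qubit Pauli error rates (p_I, p_X, p_Y, p_Z).
p_true = np.array([0.90, 0.05, 0.03, 0.02])

# chi[a, b] = +1 if Paulis a and b commute, -1 otherwise (order: I, X, Y, Z).
chi = np.array([[1,  1,  1,  1],
                [1,  1, -1, -1],
                [1, -1,  1, -1],
                [1, -1, -1,  1]])

shots = 200_000
lam_hat = np.ones(4)  # lambda_I = 1 for any Pauli channel
for a in (1, 2, 3):   # estimate the Pauli fidelity lambda_a for a in {X, Y, Z}
    # Prepare the +1 eigenstate of Pauli a, apply the channel, measure a.
    # The measurement outcome flips iff the sampled error anticommutes with a.
    errors = rng.choice(4, size=shots, p=p_true)
    outcomes = chi[a, errors]        # +1 / -1 measurement results
    lam_hat[a] = outcomes.mean()     # estimates sum_b p_b * chi[a, b]

# Invert the Walsh-Hadamard-type transform to recover the error rates.
p_hat = chi @ lam_hat / 4

print(np.round(p_hat, 3))
```

Each fidelity estimate has standard error $O(1/\sqrt{\text{shots}})$, so the recovered rates converge to `p_true` at the usual statistical rate; the paper's contribution is achieving the analogous guarantee for all $4^n$ error rates of an $n$-qubit channel with only logarithmic overhead in $n$.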
Related papers
- Efficient Pauli channel estimation with logarithmic quantum memory [9.275532709125242]
We show that a protocol can estimate the eigenvalues of a Pauli channel to error $\epsilon$ using only $O(\log n/\epsilon^2)$ ancilla and $\tilde{O}(n^2/\epsilon^2)$ measurements.
Our results imply, to our knowledge, the first quantum learning task where logarithmically many qubits of quantum memory suffice for an exponential statistical advantage.
arXiv Detail & Related papers (2023-09-25T17:53:12Z) - Distribution-Independent Regression for Generalized Linear Models with
Oblivious Corruptions [49.69852011882769]
We show the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise.
We present an algorithm that tackles this problem in its most general distribution-independent setting.
This is the first algorithmic result for GLM regression with oblivious noise which can handle more than half the samples being arbitrarily corrupted.
arXiv Detail & Related papers (2023-09-20T21:41:59Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Query-optimal estimation of unitary channels in diamond distance [3.087385668501741]
We consider process tomography for unitary quantum channels.
We output a classical description of a unitary that is $\varepsilon$-close to the unknown unitary in diamond norm.
arXiv Detail & Related papers (2023-02-27T19:00:00Z) - Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z) - How to simulate quantum measurement without computing marginals [3.222802562733787]
We describe and analyze algorithms for classically simulating measurement of an $n$-qubit quantum state $\psi$ in the standard basis.
Our algorithms reduce the sampling task to computing $\mathrm{poly}(n)$ amplitudes of $n$-qubit states.
arXiv Detail & Related papers (2021-12-15T21:44:05Z) - Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_{\text{noisy}}$ of a generic noisy circuit instance and its noiseless counterpart shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_{\text{unif}}$ at precisely the same rate.
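Under this white-noise picture the noisy distribution is a mixture $p_{\text{noisy}} = F\,p_{\text{ideal}} + (1-F)\,p_{\text{unif}}$, and a suitably normalized linear cross-entropy score recovers the fidelity $F$ exactly. A minimal numerical sketch, using a hypothetical Porter-Thomas-like ideal distribution and an illustrative fidelity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
dim = 2 ** n

# Hypothetical ideal output distribution of a random circuit:
# exponentially distributed (Porter-Thomas-like) probabilities, normalized.
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

# White-noise model: mixture of the ideal distribution (weight F = fidelity)
# and the uniform distribution (weight 1 - F).
F = 0.3
p_unif = np.full(dim, 1.0 / dim)
p_noisy = F * p_ideal + (1 - F) * p_unif

def xeb(p, q):
    """Linear cross-entropy of p against ideal q, normalized so the
    ideal distribution scores 1 and the uniform distribution scores 0."""
    return (dim * np.dot(p, q) - 1) / (dim * np.dot(q, q) - 1)

print(round(xeb(p_noisy, p_ideal), 3))
```

Because the score is linear in its first argument, the mixture's uniform component contributes exactly 0, so the benchmark returns $F$; this is the sense in which local noise behaving like global white noise makes XEB a fidelity estimator.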
arXiv Detail & Related papers (2021-11-29T19:26:28Z) - Tight Bounds for the Randomized and Quantum Communication Complexities
of Equality with Small Error [1.6522364074260811]
We investigate the randomized and quantum communication complexities of the Equality function with small error probability $\epsilon$.
We show that any $\log(n/\epsilon)-\log\log(\sqrt{n}/\epsilon)+3$ protocol communicates at least $\log(n/\epsilon)-\log(\sqrt{n}/\epsilon)-O(1)$ qubits.
arXiv Detail & Related papers (2021-07-25T13:52:42Z) - Fast Estimation of Sparse Quantum Noise [1.933681537640272]
We present a practical algorithm for estimating the $s$ nonzero Pauli error rates in an $s$-sparse, $n$-qubit Pauli noise channel.
We experimentally validate a version of the algorithm that uses simplified Clifford circuits on data from an IBM 14-qubit superconducting device.
arXiv Detail & Related papers (2020-07-15T18:00:01Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.