Locality and Error Mitigation of Quantum Circuits
- URL: http://arxiv.org/abs/2303.06496v1
- Date: Sat, 11 Mar 2023 20:43:36 GMT
- Title: Locality and Error Mitigation of Quantum Circuits
- Authors: Minh C. Tran, Kunal Sharma, Kristan Temme
- Abstract summary: We study and improve two leading error mitigation techniques, namely Probabilistic Error Cancellation (PEC) and Zero-Noise Extrapolation (ZNE).
For PEC, we introduce a new estimator that takes into account the light cone of the unitary circuit with respect to a target local observable.
- Score: 0.7366405857677226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we study and improve two leading error mitigation techniques,
namely Probabilistic Error Cancellation (PEC) and Zero-Noise Extrapolation
(ZNE), for estimating the expectation value of local observables. For PEC, we
introduce a new estimator that takes into account the light cone of the unitary
circuit with respect to a target local observable. Given a fixed error
tolerance, the sampling overhead for the new estimator can be several orders of
magnitude smaller than the standard PEC estimators. For ZNE, we also use
light-cone arguments to establish an error bound that closely captures the
behavior of the bias that remains after extrapolation.
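To make the abstract's two ideas concrete, the following is a minimal, self-contained sketch (not the authors' implementation): it counts the two-qubit gates lying in the backward light cone of a single-site observable in an assumed brickwork circuit, compares the resulting PEC sampling overhead with the overhead of mitigating every gate, and then performs a basic polynomial zero-noise extrapolation on assumed measurement data. The circuit layout, the per-gate overhead factor gamma, and the noisy expectation values are all illustrative assumptions.

```python
# Minimal sketch, not the authors' code: illustrates (i) why restricting PEC to the
# light cone of a local observable shrinks the sampling overhead and (ii) a basic
# polynomial ZNE fit. Circuit layout, gamma, and the data below are assumptions.
import numpy as np

def light_cone_gate_count(layers, observable_support):
    """Count gates inside the backward light cone of a local observable.

    layers: circuit layers applied in order; each layer is a list of two-qubit
            gates given as (qubit_a, qubit_b) pairs.
    observable_support: qubit indices the target observable acts on.
    """
    support = set(observable_support)
    in_cone = 0
    for layer in reversed(layers):           # sweep from the last layer backwards
        for gate in layer:
            if support & set(gate):          # gate can influence the observable
                in_cone += 1
                support |= set(gate)         # its qubits join the light cone
    return in_cone

# Assumed example: 20 qubits, brickwork circuit of depth 10, observable on qubit 10.
n_qubits, depth = 20, 10
layers = [[(q, q + 1) for q in range(layer % 2, n_qubits - 1, 2)]
          for layer in range(depth)]
total_gates = sum(len(layer) for layer in layers)
cone_gates = light_cone_gate_count(layers, {10})

# PEC sampling cost scales roughly as gamma^(2 * number_of_mitigated_gates) / epsilon^2;
# gamma = 1.1 per gate is an illustrative value, not taken from the paper.
gamma = 1.1
print("gates: total =", total_gates, ", in light cone =", cone_gates)
print("overhead ratio (all gates / light-cone gates) ~",
      gamma ** (2 * (total_gates - cone_gates)))

# ZNE: measure <O> at amplified noise levels c >= 1, fit a low-degree polynomial
# in c, and read off the intercept at c = 0 (the data here are made up).
noise_factors = np.array([1.0, 2.0, 3.0])
noisy_values = np.array([0.82, 0.69, 0.58])
zne_estimate = np.polyval(np.polyfit(noise_factors, noisy_values, deg=2), 0.0)
print("ZNE estimate of <O> at zero noise ~", round(zne_estimate, 3))
```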
Related papers
- Lightcone shading for classically accelerated quantum error mitigation [1.1801688624472007]
Quantum error mitigation (QEM) can recover accurate expectation values from a noisy quantum computer by trading off bias for variance.
Probabilistic error cancellation (PEC) stands out among QEM methods as an especially robust means of controllably eliminating bias.
We present an algorithm providing a practical benefit for some problems even with modest classical resources.
arXiv Detail & Related papers (2024-09-06T16:48:09Z)
- Pauli Check Extrapolation for Quantum Error Mitigation [8.436934066461625]
Pauli Check Extrapolation (PCE) integrates Pauli Check Sandwiching (PCS) with an extrapolation technique similar to Zero-Noise Extrapolation (ZNE).
We show that PCE can achieve higher fidelities than the state-of-the-art Robust Shadow (RS) estimation scheme.
arXiv Detail & Related papers (2024-06-20T22:07:42Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise drawn from a zero-mean distribution is added to the target values to make the error distribution Gaussian.
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Purity-Assisted Zero-Noise Extrapolation for Quantum Error Mitigation [10.577215927026199]
A purity-assisted zero-noise extrapolation (pZNE) method is used to address limitations in error-rate assumptions.
While pZNE does not significantly reduce the bias of routine ZNE, it extends its effectiveness to a wider range of error rates where routine ZNE may face limitations.
arXiv Detail & Related papers (2023-10-16T03:46:30Z)
- Algorithmic error mitigation for quantum eigenvalues estimation [0.9002260638342727]
Even fault-tolerant computers will be subject to algorithmic errors when estimating eigenvalues.
We propose an error mitigation strategy that enables a reduction of the algorithmic errors.
Our results promise accurate eigenvalue estimation even in early fault-tolerant devices with a limited number of qubits.
arXiv Detail & Related papers (2023-08-07T19:16:54Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is a linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for bias-constrained estimation (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
arXiv Detail & Related papers (2020-10-09T22:54:38Z)
- Error mitigation via verified phase estimation [0.25295633594332334]
This paper presents a new error mitigation technique based on quantum phase estimation.
We show that it can be adapted to function without the use of control qubits.
arXiv Detail & Related papers (2020-10-06T07:44:10Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias [65.13042449121411]
In practice, training a network with the gradient estimates provided by Equilibrium Propagation (EP) does not scale to visual tasks harder than MNIST.
We show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon.
We apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error.
arXiv Detail & Related papers (2020-06-06T09:36:07Z)
- Nonparametric Estimation of the Fisher Information and Its Applications [82.00720226775964]
This paper considers the problem of estimation of the Fisher information for location from a random sample of size $n$.
An estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
A new estimator, termed a clipped estimator, is proposed.
arXiv Detail & Related papers (2020-05-07T17:21:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.