Faster Probabilistic Error Cancellation
- URL: http://arxiv.org/abs/2506.04468v1
- Date: Wed, 04 Jun 2025 21:38:56 GMT
- Title: Faster Probabilistic Error Cancellation
- Authors: Yi-Hsiang Chen
- Abstract summary: We propose a new method to perform PEC that results in a lower sampling cost than the standard approach. We show the savings, both analytically and numerically, over standard PEC. We also demonstrate this method experimentally and find excellent agreement between the mitigated and the ideal values.
- Score: 6.418044102466421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic error cancellation (PEC) is a leading quantum error mitigation method that provides an unbiased estimate, although it is known to have a large sampling overhead. In this work, we propose a new method to perform PEC that results in a lower sampling cost than the standard approach. It works by decomposing the inverse channel of each gate or each circuit layer into an identity part and a non-identity part and reorganizing the full circuit as different powers of the inverse generator. The ideal circuit becomes a linear combination of noisy circuits with different weights, where shots are deterministically allocated to each circuit based on its weight. This naturally sets the achievable bias given a finite number of shots. As the number of shots is increased, smaller bias terms can be gradually resolved, and the estimate becomes bias-free in the limit of sufficiently many shots. We show the savings both analytically and numerically over standard PEC and identify situations where it can outperform heuristic approaches, such as zero-noise extrapolation, due to its well-controlled bias. We also demonstrate this method experimentally and find excellent agreement between the mitigated and the ideal values.
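As a rough sketch of the allocation step described above (not the authors' code; `run_circuit`, the layer count `L`, and the coefficients `c0`, `c1` are illustrative placeholders), one can expand a per-layer inverse channel of the form c0*I + c1*B into binomial weights over the number of B insertions, allocate shots deterministically in proportion to each weight, and read off the residual bias from the terms whose allocation rounds to zero:

```python
import numpy as np
from math import comb

def allocate_shots(weights, total_shots):
    """Deterministically allocate shots in proportion to |weight|.
    Terms whose allocation rounds to zero are dropped; their total
    absolute weight bounds the residual bias of the estimate."""
    w = np.asarray(weights, dtype=float)
    probs = np.abs(w) / np.abs(w).sum()
    shots = np.floor(probs * total_shots).astype(int)
    bias_bound = np.abs(w[shots == 0]).sum()
    return shots, bias_bound

def mitigated_expectation(weights, run_circuit, total_shots):
    """Weighted sum of per-circuit sample means. run_circuit(k, shots)
    is a placeholder returning the sample mean of the observable on the
    noisy circuit carrying k inverse-generator insertions."""
    shots, bias_bound = allocate_shots(weights, total_shots)
    est = sum(w * run_circuit(k, int(s))
              for k, (w, s) in enumerate(zip(weights, shots)) if s > 0)
    return est, bias_bound

# L layers whose inverse channel splits as c0*I + c1*B; expanding the
# product over layers gives binomial weights over k total B insertions.
L, c0, c1 = 10, 1.02, -0.02  # illustrative values only
weights = [comb(L, k) * c0 ** (L - k) * c1 ** k for k in range(L + 1)]
```

With this allocation, a larger shot budget resolves progressively smaller weights, matching the abstract's claim that the estimate becomes bias-free in the limit of sufficiently many shots.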
Related papers
- One Sample is Enough to Make Conformal Prediction Robust [53.78604391939934]
We show that conformal prediction attains some robustness even with a forward pass on a single randomly perturbed input. Our approach returns robust sets with a smaller average set size than SOTA methods, which use many (e.g., around 100) passes per input.
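As a loose illustration (a generic split-conformal sketch, not the paper's method or guarantees; `model`, the noise scale `sigma`, and the nonconformity score 1 - p are assumptions), a single perturbed forward pass replaces the many noisy passes used by smoothing-style baselines:

```python
import numpy as np

def conformal_set(model, x, calib_scores, alpha=0.1, sigma=0.25, rng=None):
    """Split-conformal prediction set from ONE forward pass on a single
    randomly perturbed input. `model` maps an input array to class
    probabilities; `calib_scores` are held-out nonconformity scores
    (here: 1 - probability assigned to the true class)."""
    rng = np.random.default_rng(rng)
    probs = model(x + sigma * rng.standard_normal(x.shape))  # single noisy pass
    n = len(calib_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)     # conformal quantile
    q = np.quantile(calib_scores, level)
    return [c for c, p in enumerate(probs) if 1.0 - p <= q]
```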
arXiv Detail & Related papers (2025-06-19T19:14:25Z)
- Coherently mitigating boson samplers with stochastic errors [0.26388783516590225]
Quantum devices such as boson samplers are susceptible to various errors, including fabrication imperfections. We propose a unitary averaging protocol which employs multiple boson samplers to generate a distribution that approximates the ideal boson sampler distribution. This yields a rigorous upper bound on the trace distance between the output probability distribution induced by invertible vacuum-heralded networks and the ideal one.
arXiv Detail & Related papers (2025-04-30T18:16:22Z)
- Universal quantum computation via scalable measurement-free error correction [45.29832252085144]
We show that universal quantum computation can be made fault-tolerant in a scenario where error correction is implemented without mid-circuit measurements. We introduce a measurement-free deformation protocol of the Bacon-Shor code to realize a logical $\mathit{CCZ}$ gate. In particular, our findings support that beyond-breakeven logical performance is achievable with a circuit-level error rate below $10^{-3}$.
arXiv Detail & Related papers (2024-12-19T18:55:44Z)
- Harmonic Path Integral Diffusion [0.4527270266697462]
We present a novel approach for sampling from a continuous multivariate probability distribution, which may either be explicitly known (up to a normalization factor) or represented via empirical samples.
Our method constructs a time-dependent bridge from a delta function centered at the origin of the state space at $t=0$, transforming it into the target distribution at $t=1$.
We contrast these algorithms with other sampling methods, particularly simulated annealing and path integral sampling, highlighting their advantages in terms of analytical control, accuracy, and computational efficiency.
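A minimal sketch of the bridging idea follows (a plain Gaussian bridge, not the paper's harmonic construction; `target_samples` stands in for the empirical representation of the target):

```python
import numpy as np

def bridge_samples(target_samples, t, rng=None):
    """Gaussian-bridge sketch: a delta at the origin at t=0 is carried
    to the empirical target distribution at t=1; intermediate times mix
    a scaled target draw with pinned Brownian-bridge noise."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(len(target_samples), size=len(target_samples))
    x1 = np.asarray(target_samples)[idx]           # endpoint from target
    noise = rng.standard_normal(x1.shape)          # bridge fluctuation
    return t * x1 + np.sqrt(t * (1.0 - t)) * noise
```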
arXiv Detail & Related papers (2024-09-23T16:20:21Z)
- Lightcone shading for classically accelerated quantum error mitigation [1.1801688624472007]
Quantum error mitigation (QEM) can recover accurate expectation values from a noisy quantum computer by trading off bias for variance.
Probabilistic error cancellation (PEC) stands out among QEM methods as an especially robust means of controllably eliminating bias.
We present an algorithm providing a practical benefit for some problems even with modest classical resources.
arXiv Detail & Related papers (2024-09-06T16:48:09Z)
- Finding Transformer Circuits with Edge Pruning [71.12127707678961]
We propose Edge Pruning as an effective and scalable solution to automated circuit discovery. Our method finds circuits in GPT-2 that use less than half the number of edges compared to circuits found by previous methods. Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model more than 100x larger than those prior methods operate on.
arXiv Detail & Related papers (2024-06-24T16:40:54Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
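A sketch of the adaptive-rejection idea follows (the round-robin sampling schedule and the confidence-style threshold are illustrative assumptions, not the paper's sred rule):

```python
import numpy as np

def adaptive_rejection_bai(pull, n_arms, budget):
    """Fixed-budget best-arm identification sketch: sample surviving arms
    in rounds and reject an arm in ANY round once its empirical gap to
    the current leader exceeds a confidence-style threshold.
    pull(a) returns one stochastic reward for arm a."""
    sums = np.zeros(n_arms)
    counts = np.zeros(n_arms, dtype=int)
    active = list(range(n_arms))
    while budget > 0 and len(active) > 1:
        for a in list(active):
            if budget == 0:
                break
            sums[a] += pull(a)
            counts[a] += 1
            budget -= 1
        means = sums / np.maximum(counts, 1)
        leader = max(active, key=lambda a: means[a])
        radius = np.sqrt(2.0 * np.log(max(counts.sum(), 2))
                         / np.maximum(counts, 1))
        active = [a for a in active
                  if means[leader] - means[a] <= radius[leader] + radius[a]]
    means = sums / np.maximum(counts, 1)
    return max(active, key=lambda a: means[a])
```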
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning [44.50394347326546]
Off-policy learning from multistep returns is crucial for sample-efficient reinforcement learning.
Off-policy bias is corrected in a per-decision manner, but once a trace has been fully cut, the effect cannot be reversed.
We propose a multistep operator that can express both per-decision and trajectory-aware methods.
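For contrast with the trajectory-aware operator, here is a minimal per-decision sketch in the style of Retrace (an illustration, not the paper's operator), showing how a cut trace never recovers:

```python
import numpy as np

def per_decision_returns(rewards, values, rhos, gamma=0.99, lam=0.9):
    """Per-decision trace sketch: the trace c_t = lam * min(1, rho_t)
    cuts the multistep return wherever behavior and target policies
    disagree, and once the running product reaches zero it never
    recovers. rewards[t] and importance ratios rhos[t] refer to step t;
    values has length T+1 so values[T] bootstraps the tail."""
    T = len(rewards)
    G = np.empty(T)
    corr = 0.0  # running value of G_{t+1} - V(x_{t+1})
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        c_next = lam * min(1.0, rhos[t + 1]) if t + 1 < T else 0.0
        corr = delta + gamma * c_next * corr
        G[t] = values[t] + corr
    return G
```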
arXiv Detail & Related papers (2023-01-26T18:57:41Z)
- Importance sampling for stochastic quantum simulations [68.8204255655161]
We build on the qDrift protocol, which constructs random product formulas by sampling terms from the Hamiltonian according to their coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
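A minimal sketch of the sampling stage follows (the cost-aware proposal `costs` and the time-stretching rule are stated assumptions capturing the spirit, not the paper's exact prescription):

```python
import numpy as np

def qdrift_schedule(coeffs, total_time, n_steps, costs=None, rng=None):
    """qDrift-style sampling sketch for H = sum_j h_j H_j with h_j > 0.

    Plain qDrift draws term j with probability h_j / lambda and evolves
    it for a fixed step time lambda*t/N. Sampling from a different
    proposal q_j and stretching each step time to h_j*t/(N*q_j) leaves
    the averaged first-order evolution unchanged; biasing q_j by a
    per-term simulation cost is the spirit of the importance sampling."""
    rng = np.random.default_rng(rng)
    h = np.asarray(coeffs, dtype=float)
    q = h if costs is None else h / np.asarray(costs, dtype=float)
    q = q / q.sum()
    idx = rng.choice(len(h), size=n_steps, p=q)       # sampled term indices
    times = h[idx] * total_time / (n_steps * q[idx])  # adjusted step times
    return idx, times
```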
arXiv Detail & Related papers (2022-12-12T15:06:32Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
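A minimal sketch of a bias-constrained objective follows (the grouping of estimates by unknown and the penalty weight `mu` are assumptions for illustration, not the paper's formulation):

```python
import numpy as np

def bias_constrained_loss(predictions, targets, mu=1.0):
    """Ordinary MSE plus a penalty on the squared empirical bias.
    Each row of `predictions` and `targets` is assumed to hold a batch
    of estimates of the SAME unknown, so the inner mean estimates the
    bias for that unknown; `mu` trades variance against bias."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    mse = np.mean((predictions - targets) ** 2)
    bias_sq = np.mean(np.mean(predictions - targets, axis=1) ** 2)
    return mse + mu * bias_sq
```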
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Unifying and benchmarking state-of-the-art quantum error mitigation techniques [0.6649973446180738]
In this work, we recognize that many state-of-the-art error mitigation methods share a common feature: they are data-driven.
We show that Virtual Distillation (VD) can be viewed in a similar manner by considering classical data produced from different numbers of state preparations.
Specifically, we employ a realistic noise model obtained from a trapped-ion quantum computer to benchmark UNITED, our unified method, as well as other state-of-the-art methods.
arXiv Detail & Related papers (2021-07-28T16:29:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.