A Logarithmic Bayesian Approach to Quantum Error Detection
- URL: http://arxiv.org/abs/2110.10732v3
- Date: Sat, 26 Mar 2022 03:53:43 GMT
- Title: A Logarithmic Bayesian Approach to Quantum Error Detection
- Authors: Ian Convy and K. Birgitta Whaley
- Abstract summary: We propose a pair of digital filters using logarithmic probabilities to achieve near-optimal performance on a three-qubit bit-flip code.
These filters are approximations of an optimal filter that we derive explicitly for finite time steps.
We demonstrate that the single-term and two-term filters are able to significantly outperform both a double threshold scheme and a linearized version of the Wonham filter.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of continuous quantum error correction from a
Bayesian perspective, proposing a pair of digital filters using logarithmic
probabilities that are able to achieve near-optimal performance on a
three-qubit bit-flip code, while still being reasonable to implement on
low-latency hardware. These practical filters are approximations of an optimal
filter that we derive explicitly for finite time steps, in contrast with
previous work that has relied on stochastic differential equations such as the
Wonham filter. By utilizing logarithmic probabilities, we are able to eliminate
the need for explicit normalization and can reduce the Gaussian noise
distribution to a simple quadratic expression. The state transitions induced by
the bit-flip errors are modeled using a Markov chain, which for
log-probabilities must be evaluated using a LogSumExp function. We develop the
two versions of our filter by constraining this LogSumExp to have either one or
two inputs, which favors either simplicity or accuracy, respectively. Using
simulated data, we demonstrate that the single-term and two-term filters are
able to significantly outperform both a double threshold scheme and a
linearized version of the Wonham filter in tests of error detection under a
wide variety of error rates and time steps.
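The log-domain update described in the abstract can be sketched compactly. The following is an illustrative Python sketch, not the authors' exact filter: the per-step flip probability `p`, the noise scale `sigma`, and the measurement model are assumed values for demonstration. It contrasts the full LogSumExp update with the single-term (max) approximation obtained by constraining the LogSumExp to one input.

```python
import numpy as np
from scipy.special import logsumexp

# Illustrative sketch (not the authors' exact filter): a discrete-time
# Bayesian update in the log domain for the three-qubit bit-flip code.
# The hidden state is one of 8 error configurations; p and sigma below
# are assumed values, not taken from the paper.

N_QUBITS = 3
N_STATES = 2 ** N_QUBITS
p = 1e-3        # assumed per-step flip probability
sigma = 1.0     # assumed syndrome measurement noise (std)

# Log transition matrix of the error Markov chain: logT[s, t] is the
# log-probability of moving from configuration s to t in one step,
# assuming independent per-qubit flips.
logT = np.empty((N_STATES, N_STATES))
for s in range(N_STATES):
    for t in range(N_STATES):
        flips = bin(s ^ t).count("1")
        logT[s, t] = flips * np.log(p) + (N_QUBITS - flips) * np.log(1 - p)

def gaussian_log_lik(z, means, sigma=sigma):
    # In log probabilities the Gaussian reduces to a quadratic; additive
    # constants can be dropped because the filter never normalizes.
    return -0.5 * ((z - means) / sigma) ** 2

def exact_step(log_probs, log_lik):
    # Optimal update: full LogSumExp over all predecessor states.
    pred = logsumexp(log_probs[:, None] + logT, axis=0)
    return pred + log_lik          # unnormalized log posterior

def single_term_step(log_probs, log_lik):
    # One-input LogSumExp: keep only the dominant predecessor term,
    # trading accuracy for a much simpler hardware implementation.
    pred = np.max(log_probs[:, None] + logT, axis=0)
    return pred + log_lik
```

Error detection then amounts to comparing the running log-probability of the no-error configuration against the seven error configurations, with no explicit normalization required.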
Related papers
- Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence [54.580605276017096]
Rectified Flow (RF) aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems.
RF theoretically straightens the trajectory through successive rectifications, reducing the number of function evaluations (NFEs) required for sampling.
We provide the first theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution.
arXiv Detail & Related papers (2024-10-19T02:36:11Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Stochastic Quantum Sampling for Non-Logconcave Distributions and
Estimating Partition Functions [13.16814860487575]
We present quantum algorithms for sampling from nonlogconcave probability distributions.
$f$ can be written as a finite sum $f(x) := \frac{1}{N}\sum_{k=1}^{N} f_k(x)$.
arXiv Detail & Related papers (2023-10-17T17:55:32Z) - Streaming quantum gate set tomography using the extended Kalman filter [0.0]
We apply the extended Kalman filter to data from quantum gate set tomography to provide a streaming estimator of both the system error model and its uncertainties.
With our method, a standard laptop can process one- and two-qubit circuit outcomes and update the gate set error model at rates comparable to current experimental execution.
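A minimal sketch of the extended Kalman filter measurement update underlying such streaming estimation; the state and measurement model here are generic placeholders, not the gate-set parameterization used in the paper.

```python
import numpy as np

# Hedged sketch: one generic EKF measurement update. In the streaming
# setting, x would hold error-model parameters and each new batch of
# circuit outcomes would supply an observation z; that mapping is an
# assumption for illustration, not the paper's construction.

def ekf_update(x, P, z, h, H, R):
    """x: state estimate, P: state covariance, z: observation,
    h: measurement function, H: Jacobian of h at x, R: noise covariance."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # updated estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated uncertainty
    return x_new, P_new
```

Because P is carried along with x, the filter reports uncertainties on the error model for free, which is the point emphasized in the abstract.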
arXiv Detail & Related papers (2023-06-26T23:51:08Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach to optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Partial randomized benchmarking [0.0]
In randomized benchmarking of quantum logical gates, partial twirling can be used for simpler implementation, better scaling, and higher accuracy and reliability.
We analyze such simplified, partial twirling and demonstrate that, unlike for the standard randomized benchmarking, the measured decay of fidelity is a linear combination of exponentials with different decay rates.
arXiv Detail & Related papers (2021-11-07T22:15:11Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
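For context, a compact sketch of what vanilla AIS computes; note that the paper's differentiable variant abandons the Metropolis-Hastings steps, so the MH move below illustrates the classical algorithm, and the target, base density, and settings are assumptions for demonstration.

```python
import numpy as np

# Vanilla AIS sketch: estimate the normalizing constant Z of an
# unnormalized target by annealing from a tractable base density.
# Here the target is an unnormalized N(3, 1) with true Z = 2 (assumed
# toy example, not from the paper).

rng = np.random.default_rng(1)

def log_target(x):    # unnormalized: Z * N(x; 3, 1) with Z = 2
    return np.log(2.0) - 0.5 * (x - 3.0) ** 2 - 0.5 * np.log(2 * np.pi)

def log_base(x):      # normalized base: N(x; 0, 1)
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def ais_log_z(n_chains=1000, n_temps=50, step=0.5):
    betas = np.linspace(0.0, 1.0, n_temps)
    x = rng.standard_normal(n_chains)       # exact samples from the base
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # accumulate importance weight for the bridge b0 -> b1
        log_w += (b1 - b0) * (log_target(x) - log_base(x))
        # one MH move targeting the geometric bridge at temperature b1
        prop = x + step * rng.standard_normal(n_chains)
        log_acc = ((1 - b1) * log_base(prop) + b1 * log_target(prop)
                   - (1 - b1) * log_base(x) - b1 * log_target(x))
        x = np.where(np.log(rng.random(n_chains)) < log_acc, prop, x)
    m = log_w.max()                          # stable log-mean-exp
    return m + np.log(np.mean(np.exp(log_w - m)))
```

The estimator is unbiased in Z; the differentiability question in the paper concerns how to backpropagate through this annealing chain once the non-differentiable accept/reject step is removed.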
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
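The clipping operation itself is simple to state; the sketch below shows standard norm clipping with a fixed level `lam` and stepsize, which is the classical form rather than the novel stepsize rules proposed in the paper.

```python
import numpy as np

# Standard gradient clipping step (illustrative; the paper's contribution
# is the stepsize rules paired with clipping, not this operation itself).

def clipped_sgd_step(x, grad, stepsize, lam):
    g_norm = np.linalg.norm(grad)
    if g_norm > lam:
        grad = grad * (lam / g_norm)   # project gradient onto the lam-ball
    return x - stepsize * grad
```

Clipping bounds the per-step displacement by `stepsize * lam`, which is what makes high-probability guarantees possible under heavy-tailed gradient noise.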
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - One-Bit Compressed Sensing via One-Shot Hard Thresholding [7.594050968868919]
The problem of 1-bit compressed sensing is to estimate a sparse signal from a few binary measurements.
We present a novel and concise analysis that moves away from the widely used non-constrained notion of width.
arXiv Detail & Related papers (2020-07-07T17:28:03Z)
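The one-shot estimator at the heart of this line of work can be sketched directly: backproject the sign measurements and hard-threshold once. The data dimensions and recovery target below are illustrative assumptions, and the paper's width-free analysis is not reproduced here.

```python
import numpy as np

# Hedged sketch of one-shot hard thresholding for 1-bit compressed
# sensing: given y = sign(A x) for a k-sparse unit vector x, keep the
# k largest entries of the backprojection A^T y. Problem sizes are
# illustrative assumptions.

def one_shot_hard_threshold(A, y, k):
    backproj = A.T @ y                           # correlate with measurements
    support = np.argsort(np.abs(backproj))[-k:]  # keep k largest magnitudes
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = backproj[support]
    return x_hat / np.linalg.norm(x_hat)  # direction only: 1-bit loses scale

rng = np.random.default_rng(0)
n, d, k = 2000, 100, 5
x = np.zeros(d)
x[:k] = 1.0
x /= np.linalg.norm(x)                    # k-sparse unit signal
A = rng.standard_normal((n, d))           # Gaussian measurement matrix
y = np.sign(A @ x)                        # 1-bit measurements
x_hat = one_shot_hard_threshold(A, y, k)
```

Since sign measurements discard amplitude, only the direction of `x` is identifiable, which is why the estimate is normalized.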
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.