The random coupled-plaquette gauge model and the surface code under circuit-level noise
- URL: http://arxiv.org/abs/2412.14004v1
- Date: Wed, 18 Dec 2024 16:20:14 GMT
- Title: The random coupled-plaquette gauge model and the surface code under circuit-level noise
- Authors: Manuel Rispler, Davide Vodola, Markus Müller, Seyong Kim
- Abstract summary: We optimally account for genuine Y-errors in the surface code in a setting with noisy measurements.
We tackle the circuit-level noise scenario, where we use a reduction technique to find effective asymmetric depolarizing and syndrome noise rates.
- Abstract: We map the decoding problem of the surface code under depolarizing and syndrome noise to a disordered spin model, which we call the random coupled-plaquette gauge model (RCPGM). By coupling X- and Z-syndrome volumes, this model allows us to optimally account for genuine Y-errors in the surface code in a setting with noisy measurements. Using Parallel Tempering Monte Carlo simulations, we determine the code's fundamental error threshold. Firstly, for the phenomenological noise setting we determine a threshold of $6\%$ under uniform depolarizing and syndrome noise. This is a substantial improvement over results obtained via the previously known "uncoupled" random plaquette gauge model (RPGM) in the identical setting, where marginalizing Y-errors leads to a threshold of $4.3\%$. Secondly, we tackle the circuit-level noise scenario, where we use a reduction technique to find effective asymmetric depolarizing and syndrome noise rates to feed into the RCPGM mapping. Although this reduction technique breaks up some of the correlations contained in the intricacies of circuit-level noise, we find an improvement exceeding that for the phenomenological case. We report a threshold of up to $1.4\%$, to be compared to $0.7\%$ under the identical noise model when marginalizing the Y-errors and mapping to the anisotropic RPGM. These results enlarge the landscape of statistical mechanical mappings for quantum error correction. In particular, they provide an underpinning for the broadly held belief that accounting for Y-errors is a major bottleneck in improving surface code decoders. This is highly encouraging for guiding efficient practical decoder development, where heuristically accounting for Y-error correlations has seen recent advances such as belief-matching. This suggests that there is further room for improvement of the surface code for fault-tolerant quantum computation.
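To illustrate the Monte Carlo machinery behind such statistical-mechanics mappings, the following sketch runs Parallel Tempering on a small random-bond Ising model, a simpler relative of the RCPGM. The lattice size, disorder strength, temperature ladder, and sweep counts below are illustrative assumptions, not the paper's actual Hamiltonian or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8                               # linear lattice size (illustrative)
p = 0.1                             # probability of a flipped (antiferromagnetic) bond
betas = np.linspace(0.2, 1.0, 8)    # inverse temperatures of the replicas

# Quenched disorder: random +/-1 couplings on horizontal and vertical bonds.
Jh = rng.choice([1.0, -1.0], size=(L, L), p=[1 - p, p])
Jv = rng.choice([1.0, -1.0], size=(L, L), p=[1 - p, p])

def energy(s):
    """Energy of a +/-1 spin configuration with periodic boundaries."""
    return -(np.sum(Jh * s * np.roll(s, -1, axis=1))
             + np.sum(Jv * s * np.roll(s, -1, axis=0)))

def sweep(s, beta):
    """One Metropolis sweep: attempt a single-spin flip at every site."""
    for i in range(L):
        for j in range(L):
            # Local field from the four neighbours (negative indices wrap around).
            h = (Jh[i, j] * s[i, (j + 1) % L] + Jh[i, j - 1] * s[i, j - 1]
                 + Jv[i, j] * s[(i + 1) % L, j] + Jv[i - 1, j] * s[i - 1, j])
            dE = 2.0 * s[i, j] * h
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
    return s

replicas = [rng.choice([1.0, -1.0], size=(L, L)) for _ in betas]

for step in range(200):
    for k, beta in enumerate(betas):
        replicas[k] = sweep(replicas[k], beta)
    # Replica-exchange move between neighbouring temperatures:
    # accept with probability min(1, exp((beta_k - beta_{k+1}) (E_k - E_{k+1}))).
    for k in range(len(betas) - 1):
        dE = energy(replicas[k]) - energy(replicas[k + 1])
        d_beta = betas[k] - betas[k + 1]
        if rng.random() < min(1.0, np.exp(d_beta * dE)):
            replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]

print(energy(replicas[-1]))   # energy of the coldest replica
```

The exchange moves let cold replicas escape local minima of the disordered landscape, which is why parallel tempering is the standard tool for locating phase boundaries (and hence thresholds) in such spin-glass-like models.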
Related papers
- Fundamental thresholds for computational and erasure errors via the coherent information [1.4767596539913115]
We propose a framework based on the coherent information (CI) of the mixed-state density operator associated with noisy QEC codes.
We show how to rigorously derive different families of statistical mechanics mappings for generic stabilizer QEC codes in the presence of both types of errors.
arXiv Detail & Related papers (2024-12-21T18:30:30Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - DGR: Tackling Drifted and Correlated Noise in Quantum Error Correction via Decoding Graph Re-weighting [14.817445452647588]
We propose an efficient decoding graph edge re-weighting strategy with no quantum overhead.
DGR reduces the logical error rate by 3.6x under average-case noise mismatch, with an improvement exceeding 5000x under worst-case mismatch.
arXiv Detail & Related papers (2023-11-27T18:26:16Z) - Performance of surface codes in realistic quantum hardware [0.24466725954625884]
Surface codes are generally studied based on the assumption that each of the qubits that make up the surface code lattice suffers noise that is independent and identically distributed (i.i.d.)
We introduce the independent non-identically distributed (i.ni.d.) noise model, a decoherence model that accounts for the non-uniform behaviour of the decoherence parameters of qubits.
We consider and describe two methods which enhance the performance of planar codes under i.ni.d. noise.
arXiv Detail & Related papers (2022-03-29T15:57:23Z) - Performance of teleportation-based error correction circuits for bosonic codes with noisy measurements [58.720142291102135]
We analyze the error-correction capabilities of rotation-symmetric codes using a teleportation-based error-correction circuit.
We find that with the currently achievable measurement efficiencies in microwave optics, bosonic rotation codes undergo a substantial decrease in their break-even potential.
arXiv Detail & Related papers (2021-08-02T16:12:13Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Modeling and mitigation of cross-talk effects in readout noise with applications to the Quantum Approximate Optimization Algorithm [0.0]
Noise mitigation can be performed up to some error for which we derive upper bounds.
We perform experiments on 15 (23) qubits using IBM's devices to test both the noise model and the error-mitigation scheme.
We show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
arXiv Detail & Related papers (2021-01-07T02:19:58Z) - Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
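A minimal NumPy sketch of a Gaussian-mixture-style classification loss in this spirit: logits come from squared distances to class means, a margin is applied to the true class, and a likelihood term pulls features toward their class mean. Identity covariances and the hyperparameter values are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def gm_loss(features, labels, means, margin=1.0, lam=0.1):
    """Margin cross-entropy on distance-based logits plus a likelihood regularizer."""
    # Squared Euclidean distance of each feature to every class mean: (N, K).
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    logits = -0.5 * d2
    # Additive margin: make the true class harder before the softmax.
    logits[np.arange(len(labels)), labels] -= margin
    # Numerically stable log-softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_prob[np.arange(len(labels)), labels].mean()
    # Likelihood term: encourage features to cluster around their class mean.
    lkd = 0.5 * d2[np.arange(len(labels)), labels].mean()
    return ce + lam * lkd

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 2))
means = rng.standard_normal((3, 2))
labels = np.array([0, 1, 2, 0])
print(gm_loss(feats, labels, means))
```

Since the logits are just negative squared distances, no extra trainable parameters beyond the class means are needed, matching the summary's claim of a cheap drop-in replacement for softmax cross-entropy.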
arXiv Detail & Related papers (2020-11-18T03:32:27Z) - Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
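A black-box one-step attack of this flavour can be sketched by estimating the gradient from zeroth-order queries and taking a single FGSM-style sign step. The toy loss, query budget, and step size below are illustrative assumptions standing in for a real model oracle; the paper's Gaussian MRF covariance modeling is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_loss(x, target):
    """Stand-in zeroth-order oracle: we observe loss values only, never gradients."""
    return float(((x - target) ** 2).sum())

def one_step_attack(x, target, eps=0.1, sigma=1e-3, queries=64):
    """Estimate the gradient with random two-point differences, then one sign step."""
    g = np.zeros_like(x)
    for _ in range(queries):
        u = rng.standard_normal(x.shape)
        # Two-point finite-difference estimate of the directional derivative along u.
        d = (toy_loss(x + sigma * u, target)
             - toy_loss(x - sigma * u, target)) / (2 * sigma)
        g += d * u
    return x + eps * np.sign(g)   # ascend the loss, as in FGSM

x = np.zeros(8)
target = np.ones(8)
x_adv = one_step_attack(x, target)
```

Modeling correlations between the random probe directions (as the paper does with a Gaussian MRF prior) is what reduces the number of oracle queries needed for a reliable sign estimate.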
arXiv Detail & Related papers (2020-10-08T18:36:51Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Enhanced noise resilience of the surface-GKP code via designed bias [0.0]
We study the code obtained by concatenating the standard single-mode Gottesman-Kitaev-Preskill (GKP) code with the surface code.
We show that the noise tolerance of this surface-GKP code with respect to (Gaussian) displacement errors improves when a single-mode squeezing unitary is applied to each mode.
arXiv Detail & Related papers (2020-04-01T16:08:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.