Quantum Error Correction and Dynamical Decoupling: Better Together or Apart?
- URL: http://arxiv.org/abs/2602.19042v1
- Date: Sun, 22 Feb 2026 04:35:26 GMT
- Title: Quantum Error Correction and Dynamical Decoupling: Better Together or Apart?
- Authors: Victor Kasatkin, Mario Morford-Oberst, Arian Vezvaee, Daniel A. Lidar,
- Abstract summary: Quantum error correction (QEC) and dynamical decoupling (DD) are tools for protecting quantum information. We analyze a hybrid memory cycle where DD is implemented logically (LDD) using normalizer elements of an $[[n,k,d]]$ stabilizer code. We show how to ensure that LDD suppresses at least one minimum-weight uncorrectable Pauli error for the chosen recovery map.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Quantum error correction (QEC) and dynamical decoupling (DD) are tools for protecting quantum information. A natural goal is to combine them to outperform either approach alone. Such a benefit is not automatic: physical DD can conflict with an encoded subspace, and QEC performance is governed by the errors that survive decoding, not necessarily those DD suppresses. We analyze a hybrid memory cycle where DD is implemented logically (LDD) using normalizer elements of an $[[n,k,d]]$ stabilizer code, followed by a round of syndrome measurement and recovery (or, in the detection setting, postselection on a trivial syndrome). In an effective Pauli model with physical error probability $p$, LDD suppression factor $p_{DD}$, and recovery imperfection rate $p_{QEC}$ (or $p_{QED}$), we derive closed-form entanglement-fidelity expressions for QEC-only, LDD-only, physical DD, and the hybrid LDD+QEC protocol. The formulas are expressed via a small set of code-dependent weight enumerator polynomials, making the role of the decoder and the LDD group explicit. For ideal recovery LDD+QEC outperforms QEC-only iff the conditional fraction of uncorrectable Pauli errors is larger in the LDD-suppressed sector than in the unsuppressed sector. In the low-noise regime, a sufficient design rule guaranteeing hybrid advantage is that LDD suppresses at least one minimum-weight uncorrectable Pauli error for the chosen recovery map. We show how stabilizer-equivalent choices of LDD generators can be used to enforce this condition. We supplement our analysis with numerical results for the $[[7,1,3]]$ Steane code and a $[[13,1,3]]$ code, mapping regions of hybrid-protocol advantage in parameter space beyond the small-$p$ regime. Our work illustrates the need for co-design of the code, decoder, and logical decoupling group, and clarifies the conditions under which the hybrid LDD+QEC protocol is advantageous.
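The abstract's closed-form fidelity expressions sum code-dependent weight enumerators against an i.i.d. Pauli error model. As a minimal sketch of that structure (using the 3-qubit bit-flip repetition code with majority-vote decoding, not the paper's $[[7,1,3]]$ or $[[13,1,3]]$ stabilizer analysis), one can count the weight-$w$ uncorrectable error patterns $A_w$ and sum $A_w\, p^w (1-p)^{n-w}$ to get the logical failure probability; all function names here are illustrative, not from the paper:

```python
from itertools import product

def uncorrectable_weight_counts(n=3):
    """Count weight-w X-error patterns that majority-vote decoding fails on.

    For the n-qubit bit-flip repetition code, decoding fails iff more than
    half the qubits are flipped. The resulting counts A_w play the role of
    the code-dependent weight enumerators in the closed-form expressions.
    """
    counts = [0] * (n + 1)
    for pattern in product([0, 1], repeat=n):
        w = sum(pattern)                 # Hamming weight of the error
        if w > n // 2:                   # majority vote picks the wrong value
            counts[w] += 1
    return counts

def logical_error_prob(p, counts):
    """Sum_w A_w * p^w * (1-p)^(n-w): total probability of decoder failure
    under independent X errors with physical error probability p."""
    n = len(counts) - 1
    return sum(a * p**w * (1 - p)**(n - w) for w, a in enumerate(counts))

counts = uncorrectable_weight_counts()   # [0, 0, 3, 1]: three weight-2, one weight-3
p = 0.01
print(counts)
print(logical_error_prob(p, counts))     # ~3p^2 at small p: about 2.98e-4
```

The small-$p$ behavior is dominated by the minimum-weight uncorrectable errors (here the three weight-2 patterns), which mirrors the paper's design rule: suppressing even one minimum-weight uncorrectable error changes the leading-order term.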
Related papers
- Think Dense, Not Long: Dynamic Decoupled Conditional Advantage for Efficient Reasoning [32.70499084074494]
We propose Dynamic Decoupled Conditional Advantage (DDCA) to decouple efficiency optimization from correctness.
Experiments on GSM8K, MATH500, AMC23, and AIME25 show that DDCA consistently improves the efficiency-accuracy trade-off relative to adaptive baselines.
arXiv Detail & Related papers (2026-02-02T13:43:52Z) - INC: An Indirect Neural Corrector for Auto-Regressive Hybrid PDE Solvers [61.84396402100827]
We propose the Indirect Neural Corrector ($\mathrm{INC}$), which integrates learned corrections into the governing equations.
$\mathrm{INC}$ reduces the error amplification on the order of $t^{-1} + L$, where $t$ is the timestep and $L$ the Lipschitz constant.
We test $\mathrm{INC}$ in extensive benchmarks, covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence.
arXiv Detail & Related papers (2025-11-16T20:14:28Z) - Surface code scaling on heavy-hex superconducting quantum processors [36.94429692322632]
We demonstrate subthreshold scaling of a surface-code quantum memory on hardware whose native connectivity does not match the code.
We show that DD plays a major role: it suppresses coherent ZZ crosstalk and non-Markovian dephasing.
We derive an entanglement fidelity metric that is computed directly from X- and Z-basis logical-error data and provides per-cycle, SPAM-aware bounds.
arXiv Detail & Related papers (2025-10-21T17:37:40Z) - Exploiting Discriminative Codebook Prior for Autoregressive Image Generation [54.14166700058777]
Token-based autoregressive image generation systems first tokenize images into sequences of token indices with a codebook, and then model these sequences in an autoregressive paradigm.
While autoregressive generative models are trained only on index values, the prior encoded in the codebook, which contains rich token similarity information, is not exploited.
Recent studies have attempted to incorporate this prior by performing naive k-means clustering on the tokens, helping to facilitate the training of generative models with a reduced codebook.
We propose the Discriminative Codebook Prior Extractor (DCPE) as an alternative to k-means.
arXiv Detail & Related papers (2025-08-14T15:00:00Z) - Demonstration of High-Fidelity Entangled Logical Qubits using Transmons [0.0]
We propose and implement a method that leverages dynamical decoupling (DD) to drastically suppress logical errors.
The resulting hybrid QEC-LDD strategy is in principle capable of handling arbitrary weight errors.
We present a method that allows for the detection of logical errors affecting encoded Bell states, which, in this case, arise primarily from crosstalk among physical qubits.
arXiv Detail & Related papers (2025-03-18T17:47:08Z) - Efficient Approximate Degenerate Ordered Statistics Decoding for Quantum Codes via Reliable Subset Reduction [5.625796693054094]
We introduce the concept of approximate degenerate decoding and integrate it with ordered statistics decoding (OSD).
We present an ADOSD algorithm that significantly improves OSD efficiency in the code capacity noise model.
arXiv Detail & Related papers (2024-12-30T17:45:08Z) - Exploiting Pre-trained Models for Drug Target Affinity Prediction with Nearest Neighbors [58.661454334877256]
Drug-Target binding Affinity (DTA) prediction is essential for drug discovery.
Despite the application of deep learning methods to DTA prediction, the achieved accuracy remains suboptimal.
We propose $k$NN-DTA, a non-representation embedding-based retrieval method adopted on a pre-trained DTA prediction model.
arXiv Detail & Related papers (2024-07-21T15:49:05Z) - Conservative DDPG -- Pessimistic RL without Ensemble [48.61228614796803]
DDPG is hindered by the overestimation bias problem.
Traditional solutions to this bias involve ensemble-based methods.
We propose a straightforward solution using a $Q$-target and incorporating a behavioral cloning (BC) loss penalty.
arXiv Detail & Related papers (2024-03-08T23:59:38Z) - Approximate Autonomous Quantum Error Correction with Reinforcement
Learning [4.015029887580199]
Autonomous quantum error correction (AQEC) protects logical qubits by engineered dissipation.
Bosonic code spaces, where single-photon loss represents the dominant source of error, are promising candidates for AQEC.
We propose a bosonic code for approximate AQEC by relaxing the Knill-Laflamme conditions.
arXiv Detail & Related papers (2022-12-22T12:42:52Z) - Training $\beta$-VAE by Aggregating a Learned Gaussian Posterior with a
Decoupled Decoder [0.553073476964056]
Current practices in VAE training often result in a trade-off between the reconstruction fidelity and the continuity/disentanglement of the latent space.
We present intuitions and a careful analysis of the antagonistic mechanism of the two losses, and propose a simple yet effective two-stage method for training a VAE.
We evaluate the method using a medical dataset intended for 3D skull reconstruction and shape completion, and the results indicate promising generative capabilities of the VAE trained using the proposed method.
arXiv Detail & Related papers (2022-09-29T13:49:57Z) - DPO: Dynamic-Programming Optimization on Hybrid Constraints [26.704502486686128]
In Bayesian inference, the most probable explanation (MPE) problem requests a variable instantiation with the highest probability given some evidence.
It is known that Boolean MPE can be solved via reduction to (weighted partial) MaxSAT.
We build on DPMC and introduce DPO, a dynamic-programming that exactly solves MPE.
arXiv Detail & Related papers (2022-05-17T21:18:54Z) - Highly Parallel Autoregressive Entity Linking with Discriminative
Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z) - Projection-free Graph-based Classifier Learning using Gershgorin Disc
Perfect Alignment [59.87663954467815]
In graph-based binary learning, a subset of known labels $\hat{x}_i$ is used to infer unknown labels.
When restricting labels $x_i$ to binary values, the problem is NP-hard.
We propose a fast projection-free method by solving a sequence of linear programs (LP) instead.
arXiv Detail & Related papers (2021-06-03T07:22:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.