Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States
- URL: http://arxiv.org/abs/2404.09740v1
- Date: Mon, 15 Apr 2024 12:42:29 GMT
- Title: Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States
- Authors: Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii
- Abstract summary: We evaluate the spatial and temporal overhead of two-level distillation implementations generating relatively high-fidelity magic states.
We refine the second-level 15-to-1 implementation in it to capitalize on the small footprint of zero-level distillation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magic state distillation plays an important role in universal fault-tolerant quantum computing, and its overhead is one of the major obstacles to realizing fault-tolerant quantum computers. Hence, many studies have been conducted to reduce this overhead. Among these, Litinski has provided a concrete assessment of resource-efficient distillation protocol implementations on the rotated surface code. On the other hand, recently, Itogawa et al. have proposed zero-level distillation, a distillation protocol offering very small spatial and temporal overhead to generate relatively low-fidelity magic states. While zero-level distillation offers preferable spatial and temporal overhead, it cannot directly generate high-fidelity magic states since it only reduces the logical error rate of the magic state quadratically. In this study, we evaluate the spatial and temporal overhead of two-level distillation implementations generating relatively high-fidelity magic states, including ones incorporating zero-level distillation. To this end, we introduce (0+1)-level distillation, a two-level distillation protocol which combines zero-level distillation and the 15-to-1 distillation protocol. We refine the second-level 15-to-1 implementation in it to capitalize on the small footprint of zero-level distillation. Under conditions of a physical error probability of $p_{\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the magic state within $[5 \times 10^{-17}, 10^{-11}]$ ($[5 \times 10^{-11}, 10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by more than 63% (61%) compared to the (15-to-1)$\times$(15-to-1) protocol and more than 43% (44%) compared to the (15-to-1)$\times$(20-to-4) protocol, offering a substantial efficiency gain over the traditional protocols.
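The abstract's central argument can be sketched numerically: 15-to-1 distillation suppresses the input error cubically (to leading order, $p \rightarrow 35p^3$), while zero-level distillation suppresses it only quadratically, so a single zero-level round cannot reach high fidelity on its own but works well as a cheap first level feeding a 15-to-1 second level. The sketch below is illustrative only; the zero-level prefactor `c0` is an assumed placeholder, not a figure from the paper.

```python
# Rough leading-order output-error estimates for cascaded magic state
# distillation. The 15-to-1 formula p -> 35 p^3 is the standard
# leading-order result; the zero-level prefactor c0 is an assumption
# chosen for illustration, not a value from the paper.

def fifteen_to_one(p):
    """Leading-order output error of one 15-to-1 distillation round."""
    return 35 * p**3

def zero_level(p, c0=100.0):
    """Zero-level distillation suppresses error quadratically: p -> c0 * p^2.
    The prefactor c0 is a hypothetical placeholder."""
    return c0 * p**2

p_phys = 1e-4  # physical error probability from the abstract

# (15-to-1) x (15-to-1): two logical-level rounds.
two_level = fifteen_to_one(fifteen_to_one(p_phys))

# (0+1)-level: a cheap zero-level round feeding one 15-to-1 round.
zero_plus_one = fifteen_to_one(zero_level(p_phys))

print(f"(15-to-1) x (15-to-1) output error ~ {two_level:.1e}")
print(f"(0+1)-level output error           ~ {zero_plus_one:.1e}")
```

With these assumed prefactors, the (0+1)-level output lands around $3.5 \times 10^{-17}$, inside the abstract's target range $[5 \times 10^{-17}, 10^{-11}]$ for $p_{\mathrm{phys}} = 10^{-4}$, which is why the cheaper first level suffices despite its weaker quadratic suppression.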
Related papers
- Magic state cultivation on a superconducting quantum processor [108.15404500422814]
We present an experimental study of magic state cultivation on a superconducting quantum processor. Cultivation reduces the error by a factor of 40, with a state fidelity of 0.9999(1).
arXiv Detail & Related papers (2025-12-15T21:29:40Z) - Efficient magic state cultivation with lattice surgery [2.6945797019995363]
Magic state distillation plays a crucial role in fault-tolerant quantum computation. Traditional logical-level distillation offers significant overhead reduction by enabling direct implementation with physical gates. Magic state cultivation is a state-of-the-art physical-level distillation protocol that is compatible with the square-grid connectivity.
arXiv Detail & Related papers (2025-10-28T16:44:34Z) - Random Party Distillation on a Superconducting Processor [42.10607028572284]
We propose a qubit-based implementation of a random party distillation protocol. We demonstrate its efficacy on the superconducting hardware device, ibm_quebec.
arXiv Detail & Related papers (2025-08-12T17:44:11Z) - Unfolded distillation: very low-cost magic state preparation for biased-noise qubits [1.8749305679160366]
We propose a low-cost magic state distillation scheme for biased-noise qubits. The logical fidelity remains nearly identical even at a more modest noise bias of $\eta \gtrsim 80$. Our construction is based on unfolding the $X$ stabilizer group of the Hadamard 3D quantum Reed-Muller code in 2D.
arXiv Detail & Related papers (2025-07-16T18:00:00Z) - MGD$^3$: Mode-Guided Dataset Distillation using Diffusion Models [50.2406741245418]
We propose a mode-guided diffusion model leveraging a pre-trained diffusion model. Our approach addresses dataset diversity in three stages: Mode Discovery to identify distinct data modes, Mode Guidance to enhance intra-class diversity, and Stop Guidance to mitigate artifacts in synthetic samples. Our method eliminates the need for fine-tuning diffusion models with distillation losses, significantly reducing computational costs.
arXiv Detail & Related papers (2025-05-25T03:40:23Z) - High-fidelity gates in a transmon using bath engineering for passive leakage reset [65.46249968484794]
Leakage, the occupation of any state not used in the computation, is one of the most devastating errors in quantum error correction.
We demonstrate a device which reduces the lifetimes of the leakage states in the transmon by three orders of magnitude.
arXiv Detail & Related papers (2024-11-06T18:28:49Z) - Constant-time magic state distillation [0.0]
We show that, with a re-configurable qubit architecture, we can perform fast magic state distillation in $\mathcal{O}(1)$ code cycles.
We confirm the error suppression ability of both distillation circuits, from input error rate $p \rightarrow \mathcal{O}(p^3)$ under circuit-level noise.
arXiv Detail & Related papers (2024-10-23T16:08:28Z) - Surpassing the fundamental limits of distillation with catalysts [2.107610564835429]
We show that quantum catalysts can help surpass previously known fundamental limitations on distillation overhead.
In particular, in the context of magic state distillation, our result indicates that the code-based low-overhead distillation protocols can be promoted to the one-shot setting.
We demonstrate that this enables a spacetime trade-off between overhead and success probability.
arXiv Detail & Related papers (2024-10-18T15:41:52Z) - Presto! Distilling Steps and Layers for Accelerating Music Generation [49.34961693154768]
Presto! is an approach to inference acceleration for score-based diffusion transformers.
We develop a new score-based distribution matching distillation (DMD) method for the EDM-family of diffusion models.
To reduce the cost per step, we develop a simple, but powerful improvement to a recent layer distillation method.
arXiv Detail & Related papers (2024-10-07T16:24:18Z) - Low-overhead magic state distillation with color codes [1.3980986259786223]
Fault-tolerant implementation of non-Clifford gates is a major challenge for achieving universal fault-tolerant quantum computing.
We propose two distillation schemes based on the 15-to-1 distillation circuit and lattice surgery, which differ in their methods for handling faulty rotations.
arXiv Detail & Related papers (2024-09-12T02:20:17Z) - Even more efficient magic state distillation by zero-level distillation [0.8009842832476994]
We propose zero-level distillation, which prepares a high fidelity logical magic state using physical qubits on a square lattice.
The key idea behind is using the Steane code to distill a logical magic state by using noisy Clifford gates with error detection.
arXiv Detail & Related papers (2024-03-06T19:01:28Z) - Fast Flux-Activated Leakage Reduction for Superconducting Quantum Circuits [84.60542868688235]
Leakage out of the computational subspace arises from the multi-level structure of qubit implementations.
We present a resource-efficient universal leakage reduction unit for superconducting qubits using parametric flux modulation.
We demonstrate that using the leakage reduction unit in repeated weight-two stabilizer measurements reduces the total number of detected errors in a scalable fashion.
arXiv Detail & Related papers (2023-09-13T16:21:32Z) - Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation [96.92250565207017]
We study the data efficiency and selection for the dataset distillation task.
By re-formulating the dynamics of distillation, we provide insight into the inherent redundancy in the real dataset.
We find the most contributing samples based on their causal effects on the distillation.
arXiv Detail & Related papers (2023-05-28T06:53:41Z) - ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval [54.54667085792404]
We propose a novel distillation method that significantly advances cross-architecture distillation for dual-encoders.
Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.
arXiv Detail & Related papers (2022-05-18T18:05:13Z) - Rapid generation of all-optical $^{39}$K Bose-Einstein condensates using a low-field Feshbach resonance [58.720142291102135]
We investigate the production of all-optical $^{39}$K Bose-Einstein condensates with different scattering lengths using a Feshbach resonance near $33$ G.
We are able to produce fully condensed ensembles with $5.8 \times 10^4$ atoms within $850$ ms evaporation time at a scattering length of $232$.
Based on our findings we describe routes towards high-flux sources of ultra-cold potassium for inertial sensing.
arXiv Detail & Related papers (2022-01-12T16:39:32Z) - High-Fidelity Magic-State Preparation with a Biased-Noise Architecture [2.624902795082451]
Magic state distillation is a resource intensive subroutine that consumes noisy input states to produce high-fidelity resource states.
We propose an error-detecting code which detects the dominant errors that occur during state preparation.
Our approach promises considerable savings in overheads with near-term technology.
arXiv Detail & Related papers (2021-09-06T18:02:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.