Ecmas: Efficient Circuit Mapping and Scheduling for Surface Code
- URL: http://arxiv.org/abs/2312.15254v1
- Date: Sat, 23 Dec 2023 13:27:59 GMT
- Title: Ecmas: Efficient Circuit Mapping and Scheduling for Surface Code
- Authors: Mingzheng Zhu, Hao Fu, Jun Wu, Chi Zhang, Wei Xie, Xiang-Yang Li
- Abstract summary: We study the surface code mapping and scheduling problem.
To reduce the execution time of a quantum circuit, we first introduce two novel metrics.
Ecmas can dramatically reduce the execution time in both double defect and lattice surgery models.
- Score: 20.03248840966205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the leading candidate of quantum error correction codes, surface code
suffers from significant overhead, such as execution time. Reducing the
circuit's execution time not only enhances its execution efficiency but also
improves fidelity. However, finding the shortest execution time is NP-hard.
In this work, we study the surface code mapping and scheduling problem. To
reduce the execution time of a quantum circuit, we first introduce two novel
metrics: Circuit Parallelism Degree and Chip Communication Capacity to
quantitatively characterize quantum circuits and chips. Then, we propose a
resource-adaptive mapping and scheduling method, named Ecmas, with customized
initialization of chip resources for each circuit. Ecmas can dramatically
reduce the execution time in both double defect and lattice surgery models.
Furthermore, we provide an additional version Ecmas-ReSu for sufficient qubits,
which is performance-guaranteed and more efficient. Extensive numerical tests
on practical datasets show that Ecmas outperforms the state-of-the-art methods
by reducing the execution time by 51.5% on average for double defect model.
Ecmas can reach the optimal result in most benchmarks, reducing the execution
time by up to 13.9% for lattice surgery model.
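The abstract does not define the two metrics, so as a minimal sketch, here is one plausible notion of a circuit parallelism degree: the average number of gates per layer after greedy ASAP (as-soon-as-possible) layering. The metric name, the gate encoding, and the definition below are all illustrative assumptions, not the paper's actual formulas.

```python
# Hypothetical sketch of a "circuit parallelism degree"-style metric:
# average gates per layer under ASAP layering. Gates are tuples of
# qubit indices; this is an assumption, not the paper's definition.

def asap_layers(gates):
    """Pack gates greedily into layers so that no two gates in the
    same layer touch the same qubit."""
    next_free = {}   # qubit -> earliest layer in which it is free
    layers = []
    for gate in gates:
        layer = max((next_free.get(q, 0) for q in gate), default=0)
        while len(layers) <= layer:
            layers.append([])
        layers[layer].append(gate)
        for q in gate:
            next_free[q] = layer + 1
    return layers

def parallelism_degree(gates):
    """Total gate count divided by circuit depth after ASAP layering."""
    layers = asap_layers(gates)
    return len(gates) / len(layers) if layers else 0.0

# 5 two-qubit gates that fit into 3 layers -> degree 5/3
circuit = [(0, 1), (2, 3), (1, 2), (0, 3), (0, 1)]
print(parallelism_degree(circuit))
```

A higher degree means more gates can execute concurrently per time step, which is exactly the kind of quantity a resource-adaptive scheduler would trade off against the chip's communication capacity.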
Related papers
- Rectified Sparse Attention [61.7702154360081]
Efficient long-sequence generation is a critical challenge for Large Language Models. We propose Rectified Sparse Attention (ReSA), a simple yet effective method that combines block-sparse attention with periodic dense rectification. Experiments across math reasoning, language modeling, and retrieval tasks demonstrate that ReSA achieves near-lossless generation quality.
arXiv Detail & Related papers (2025-06-04T16:01:48Z) - Fast correlated decoding of transversal logical algorithms [67.01652927671279]
Quantum error correction (QEC) is required for large-scale computation, but incurs a significant resource overhead. Recent advances have shown that by jointly decoding logical qubits in algorithms composed of logical gates, the number of syndrome extraction rounds can be reduced. Here, we reformulate the problem of decoding circuits by directly decoding relevant logical operator products as they propagate through the circuit.
arXiv Detail & Related papers (2025-05-19T18:00:00Z) - Exploration of Design Alternatives for Reducing Idle Time in Shor's Algorithm: A Study on Monolithic and Distributed Quantum Systems [4.430488261124667]
We introduce an alternating design approach to minimize idle time while preserving qubit efficiency in Shor's algorithm.
We also demonstrate how task rearrangement enhances execution efficiency in the presence of multiple distribution channels.
Our findings provide a structured framework for optimizing compiled quantum circuits for Shor's algorithm.
arXiv Detail & Related papers (2025-03-28T16:07:52Z) - QuartDepth: Post-Training Quantization for Real-Time Depth Estimation on the Edge [55.75103034526652]
We propose QuartDepth which adopts post-training quantization to quantize MDE models with hardware accelerations for ASICs.
Our approach involves quantizing both weights and activations to 4-bit precision, reducing the model size and computation cost.
We design a flexible and programmable hardware accelerator by supporting kernel fusion and customized instruction programmability.
arXiv Detail & Related papers (2025-03-20T21:03:10Z) - Demonstrating dynamic surface codes [138.1740645504286]
We experimentally demonstrate three time-dynamic implementations of the surface code.
First, we embed the surface code on a hexagonal lattice, reducing the necessary couplings per qubit from four to three.
Second, we walk a surface code, swapping the role of data and measure qubits each round, achieving error correction with built-in removal of accumulated non-computational errors.
Third, we realize the surface code using iSWAP gates instead of the traditional CNOT, extending the set of viable gates for error correction without additional overhead.
arXiv Detail & Related papers (2024-12-18T21:56:50Z) - Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z) - Resource-efficient context-aware dynamical decoupling embedding for arbitrary large-scale quantum algorithms [0.0]
GraphDD is an efficient method for circuit-specific, optimal embedding of dynamical decoupling (DD) into executable quantum algorithms.
We demonstrate that GraphDD refocuses both quasi-static single-qubit dephasing and crosstalk idling errors over the entire circuit.
We verify the ability of GraphDD to deliver enhanced circuit-level error suppression on 127-qubit IBM devices.
arXiv Detail & Related papers (2024-09-09T18:01:33Z) - Finding Transformer Circuits with Edge Pruning [71.12127707678961]
We propose Edge Pruning as an effective and scalable solution to automated circuit discovery.
Our method finds circuits in GPT-2 that use less than half the number of edges compared to circuits found by previous methods.
Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model over 100x the scale that prior methods operate on.
arXiv Detail & Related papers (2024-06-24T16:40:54Z) - PreRoutGNN for Timing Prediction with Order Preserving Partition: Global Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling [84.34811206119619]
We propose a two-stage approach to pre-routing timing prediction.
First, we propose global circuit training to pre-train a graph auto-encoder that learns the global graph embedding from circuit netlist.
Second, we use a novel node updating scheme for message passing on GCN, following the topological sorting sequence of the learned graph embedding and circuit graph.
Experiments on 21 real world circuits achieve a new SOTA R2 of 0.93 for slack prediction, significantly surpassing the 0.59 of the previous SOTA method.
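The node updating scheme above can be illustrated with a toy propagation pass: updating nodes of a circuit DAG in topological order guarantees every node sees the final states of all its fan-in before computing its own. This is an illustrative sketch only, not PreRoutGNN's actual GCN update rule.

```python
# Sketch of order-preserving message passing on a circuit DAG.
# Nodes are processed in topological order, so each node aggregates
# the *final* states of its predecessors (illustrative update rule:
# state = initial value + sum of predecessor states).
from graphlib import TopologicalSorter

def propagate(edges, init):
    """edges: dict mapping node -> set of predecessor nodes.
    init: dict mapping node -> initial float value."""
    order = TopologicalSorter(edges).static_order()
    state = {}
    for node in order:
        state[node] = init.get(node, 0.0) + sum(
            state[p] for p in edges.get(node, ()))
    return state

# Tiny netlist-like DAG: a -> c, b -> c, c -> d
edges = {"c": {"a", "b"}, "d": {"c"}}
state = propagate(edges, {"a": 1.0, "b": 2.0, "c": 0.5})
print(state["d"])  # 3.5
```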
arXiv Detail & Related papers (2024-02-27T02:23:07Z) - Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching [53.91395791840179]
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
arXiv Detail & Related papers (2023-12-19T02:27:22Z) - COGNAC: Circuit Optimization via Gradients and Noise-Aware Compilation [0.29998889086656577]
We present COGNAC, a novel strategy for compiling quantum circuits.
We use a simple noise model informed by the durations of entangling gates.
We reduce a circuit's gate count without the need for a large number of explicit elimination rewrite rules.
arXiv Detail & Related papers (2023-11-05T20:59:27Z) - QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
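The basic primitive behind such schemes can be sketched as per-row symmetric 4-bit quantization of a weight matrix. This is a generic illustration of the technique, not QUIK's actual hybrid strategy or its GPU kernels.

```python
# Generic per-row symmetric 4-bit quantization sketch (not QUIK's
# actual format): each row is scaled into the signed 4-bit range
# [-8, 7] with one float scale per row.
import numpy as np

def quantize_4bit(w):
    """Returns (q, scales) with q in int8 holding 4-bit values."""
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, s = quantize_4bit(w)
print(q.dtype, np.abs(dequantize(q, s) - w).max())
```

Rounding to the nearest 4-bit level bounds the per-element error by half a scale step, which is why activations with large outliers are the hard case for schemes like this.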
arXiv Detail & Related papers (2023-10-13T17:15:05Z) - Distributed Scheduling of Quantum Circuits with Noise and Time Optimization [0.6869438083004812]
We propose a scheduler that finds the optimum schedule for the subcircuits obtained by circuit cutting on the available set of hardware.
The fidelity obtained by this method on various benchmark circuits is significantly better than that of the uncut circuit executed on the least noisy device.
arXiv Detail & Related papers (2023-09-12T07:02:21Z) - Improving Quantum Circuit Synthesis with Machine Learning [0.7894596908025954]
We show how applying machine learning to unitary datasets permits drastic speedups for synthesis algorithms.
This paper presents QSeed, a seeded synthesis algorithm that employs a learned model to quickly propose resource efficient circuit implementations of unitaries.
arXiv Detail & Related papers (2023-06-09T01:53:56Z) - Efficient algorithms to solve atom reconfiguration problems. I. The redistribution-reconfiguration (red-rec) algorithm [51.02512563152503]
We numerically quantify the performance of the red-rec algorithm, both in the absence and in the presence of loss.
We show that the number of traps required to prepare a compact-centered configuration of atoms on a grid with a mean success probability of one half scales as the 3/2 power of the number of desired atoms.
The red-rec algorithm admits an efficient implementation that can readily be deployed on real-time control systems.
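The quoted scaling law can be written out directly; the proportionality constant is not stated in the summary, so `c = 1` below is a placeholder assumption.

```python
# The summary's scaling law: preparing n_atoms atoms in a compact
# centered configuration with mean success probability 1/2 needs
# traps ~ c * n_atoms**(3/2). The constant c is an assumption here.

def traps_required(n_atoms, c=1.0):
    return c * n_atoms ** 1.5

# Doubling the target atom count multiplies the traps needed by 2**1.5.
print(traps_required(200) / traps_required(100))  # ~2.83
```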
arXiv Detail & Related papers (2022-12-07T19:00:01Z) - Qubit-reuse compilation with mid-circuit measurement and reset [0.0]
We introduce the idea of qubit-reuse compilation, which takes as input a quantum circuit and produces as output a compiled circuit.
We show that optimal qubit-reuse compilation requires the same number of qubits to execute a circuit as its dual.
We experimentally realize an 80-qubit QAOA MaxCut circuit on the 20-qubit Quantinuum H1-1 trapped ion quantum processor.
arXiv Detail & Related papers (2022-10-14T18:11:43Z)
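The core idea of qubit reuse can be sketched with a greedy toy model: once a logical qubit's last gate has fired, its physical slot is measured, reset, and handed to the next fresh logical qubit. This is an illustrative sketch, not the paper's actual compilation algorithm.

```python
# Greedy toy model of qubit-reuse compilation with mid-circuit
# measurement and reset. Gates are tuples of logical qubit indices
# in time order; finished slots are recycled for fresh qubits.

def physical_qubits_with_reuse(gates):
    """Return how many physical qubits suffice when slots are recycled."""
    last_use = {}
    for t, gate in enumerate(gates):
        for q in gate:
            last_use[q] = t
    free, mapping, n_physical = [], {}, 0
    for t, gate in enumerate(gates):
        for q in gate:
            if q not in mapping:
                if free:                      # measure + reset, then reuse
                    mapping[q] = free.pop()
                else:                         # allocate a fresh slot
                    mapping[q] = n_physical
                    n_physical += 1
        for q in gate:
            if last_use[q] == t:              # q is done; recycle its slot
                free.append(mapping[q])
    return n_physical

# A 4-qubit linear-chain circuit runs on just 2 physical qubits.
print(physical_qubits_with_reuse([(0, 1), (1, 2), (2, 3)]))  # 2
```

The same compression effect is what lets the 80-qubit QAOA circuit above fit on a 20-qubit processor: in a chain-like circuit most qubits finish long before the circuit ends.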
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or information and is not responsible for any consequences arising from their use.