Parameter Analysis and Optimization of Layer Fidelity for Quantum Processor Benchmarking at Scale
- URL: http://arxiv.org/abs/2510.16915v1
- Date: Sun, 19 Oct 2025 16:18:26 GMT
- Title: Parameter Analysis and Optimization of Layer Fidelity for Quantum Processor Benchmarking at Scale
- Authors: Maria Jose Lozano Palacio, Hasan Nayfeh, Matthew Ware, David C. McKay
- Abstract summary: Layer fidelity is a benchmark well-suited to assessing processor performance at scale. We extend the analysis of the original layer fidelity manuscript to optimize parameters of the benchmark.
- Score: 0.03499870393443267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the continued scaling of quantum processors, holistic benchmarks are essential for extensively evaluating device performance. Layer fidelity is a benchmark well-suited to assessing processor performance at scale. Key advantages of this benchmark include its natural alignment with randomized benchmarking (RB) procedures, crosstalk awareness, fast measurements over large numbers of qubits, high signal-to-noise ratio, and fine-grained information. In this work, we extend the analysis of the original layer fidelity manuscript to optimize parameters of the benchmark and extract deeper insights into its application. We present a robust protocol for identifying optimal qubit chains of length N, demonstrating that our method yields error per layered gate (EPLG) values 40%-70% lower than randomly selected chains. We further establish layer fidelity as an effective performance monitoring tool, capturing both edge-localized and device-wide degradation by tracking optimal chains of length 50 and 100, and fixed chains of length 100. Additionally, we refine error analysis by proposing parameter bounds on the number of randomizations and Clifford lengths used in direct RB fits, minimizing fit uncertainties. Finally, we analyze the impact of varying gate durations on layer fidelity measurements, showing that prolonged gate times, and the idling they introduce, significantly increase layered two-qubit (2Q) errors on Eagle R3 processors. Notably, we observe a 95% EPLG increase on a fixed chain in an Eagle R3 processor when some gate durations are extended by 65%. These findings extend the applicability of the layer fidelity benchmark and provide practical guidelines for optimizing quantum processor evaluations.
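For concreteness, the quantities above can be sketched in a few lines: per-pair direct RB decays are fit, the resulting process fidelities are multiplied into the layer fidelity LF, and EPLG = 1 - LF^(1/n_2q). This minimal sketch follows our reading of the conventions in the original layer fidelity work (arXiv:2311.05933); the function names, priors, and fidelity convention below are illustrative assumptions, not code from this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, alpha, b):
    """Direct RB decay model: survival probability vs. sequence length m."""
    return a * alpha**m + b

def process_fidelity(alpha, d=4):
    """Depolarizing parameter -> process fidelity (d = 4 for a 2Q pair)."""
    return ((d**2 - 1) * alpha + 1) / d**2

def layer_fidelity_and_eplg(lengths, pair_survivals):
    """Fit one decay per disjoint 2Q subsystem, multiply the resulting
    process fidelities into the layer fidelity LF, and report
    EPLG = 1 - LF**(1/n_2q) for a chain with n_2q two-qubit gates."""
    fids = []
    for survival in pair_survivals:
        popt, _ = curve_fit(rb_decay, lengths, survival,
                            p0=(0.75, 0.99, 0.25), maxfev=10000)
        fids.append(process_fidelity(popt[1]))
    lf = float(np.prod(fids))
    return lf, 1 - lf**(1 / len(pair_survivals))
```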
Related papers
- OptiQKD: A Machine Learning-Optimized Framework for Real-Time Parameter Tuning in Quantum Key Distribution [0.0]
We propose OptiQKD, a protocol-agnostic machine learning framework specifically engineered to maximize the Secure Key Rate (SKR) and minimize the Quantum Bit Error Rate (QBER) for the BB84, E91, and COW protocols. We evaluate the framework by simulating critical environmental stressors, including depolarizing and amplitude-damping noise, under realistic device constraints.
arXiv Detail & Related papers (2026-03-04T15:43:31Z) - Millisecond-Scale Calibration and Benchmarking of Superconducting Qubits [0.001970303609484344]
We demonstrate an on-FPGA workflow that co-locates pulse generation, data acquisition, analysis, and feed-forward, eliminating CPU round trips. Within this workflow, we introduce sparse-sampling and on-FPGA inference tools, including computationally efficient methods for estimation of exponential and sine-like response functions. These methods enable low-latency primitives for readout calibration, spectroscopy, pulse-amplitude calibration, coherence estimation, and benchmarking.
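As a point of reference, the simplest form of such an exponential estimator is a closed-form log-linear fit over a few sparse delay points; the sketch below is purely illustrative and is not the paper's FPGA implementation.

```python
import numpy as np

def fit_exponential(t, y, baseline=0.0):
    """Closed-form estimate of y(t) = A * exp(-t / tau) + baseline via a
    linear fit in log space: cheap enough, in spirit, for firmware-style
    inference (illustrative only)."""
    z = np.log(np.clip(np.asarray(y) - baseline, 1e-12, None))
    slope, intercept = np.polyfit(t, z, 1)
    return np.exp(intercept), -1.0 / slope  # amplitude A, time constant tau

# e.g. five sparse delay points from a T1-style coherence measurement
t = np.array([0.0, 20e-6, 60e-6, 140e-6, 300e-6])
y = 0.98 * np.exp(-t / 100e-6) + 0.01
print(fit_exponential(t, y, baseline=0.01))  # ~ (0.98, 1.0e-4)
```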
arXiv Detail & Related papers (2026-02-12T13:08:22Z) - Spectral Gating Networks [65.9496901693099]
We introduce Spectral Gating Networks (SGN) to bring frequency-rich expressivity to feed-forward networks. SGN augments a standard activation pathway with a compact spectral pathway and learnable gates that allow the model to start from a stable base behavior. It consistently improves accuracy-efficiency trade-offs under comparable computational budgets.
arXiv Detail & Related papers (2026-02-07T20:00:49Z) - Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models [97.55009021098554]
This work aims to identify the key determinants of SLMs' real-device latency and offer generalizable principles and methodologies for SLM design and training. We introduce a new family of hybrid SLMs, called Nemotron-Flash, which significantly advances the accuracy-efficiency frontier of state-of-the-art SLMs.
arXiv Detail & Related papers (2025-11-24T08:46:36Z) - Exploring Spiking Neural Networks for Binary Classification in Multivariate Time Series at the Edge [0.9282545044546486]
We present a general framework for training spiking neural networks (SNNs) to perform binary classification on multivariate time series. We apply it to the task of detecting low signal-to-noise ratio radioactive sources in gamma-ray spectral data. The resulting SNNs, with as few as 49 neurons and 66 synapses, achieve a 51.8% true positive rate (TPR) at a false alarm rate of 1/hr. Hardware deployment on the microCaspian neuromorphic platform demonstrates 2 mW power consumption and 20.2 ms latency.
arXiv Detail & Related papers (2025-10-23T20:52:11Z) - Causal-Guided Dimension Reduction for Efficient Pareto Optimization [2.9013001432962255]
CaDRO builds a causal map through a hybrid observational-interventional process, ranking parameters by their causal effect on the objectives. Low-impact parameters are fixed to values from high-quality solutions, while critical drivers remain active in the search. Across amplifiers, regulators, and RF circuits, CaDRO converges up to 10× faster than NSGA-II.
arXiv Detail & Related papers (2025-10-11T00:41:04Z) - Calibrating quantum gates up to 52 qubits in a superconducting processor [16.83020919407806]
We benchmark gate fidelities up to 52 qubits using a character-average benchmarking protocol. We enhance the fidelity of a 6-qubit parallel CZ gate from 87.65% to 92.04% and decrease the gate correlation from 3.53% to 3.22%.
arXiv Detail & Related papers (2025-05-28T14:17:00Z) - Optimizing Retrieval-Augmented Generation: Analysis of Hyperparameter Impact on Performance and Efficiency [1.6177972328875518]
Large language models achieve high task performance yet often hallucinate or rely on outdated knowledge. Retrieval-augmented generation (RAG) addresses these gaps by coupling generation with external search.
arXiv Detail & Related papers (2025-05-13T11:13:27Z) - Pushing the Limits of Low-Bit Optimizers: A Focus on EMA Dynamics [64.62231094774211]
Stateful optimizers (e.g., Adam) maintain auxiliary information that can reach 2x the model size in order to achieve optimal convergence. SOLO enables Adam-style optimizers to maintain quantized states with precision as low as 3 bits, or even 2 bits. SOLO can thus be seamlessly applied to Adam-style optimizers, leading to substantial memory savings with minimal accuracy loss.
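To make the low-bit state idea concrete, here is a generic sketch of an EMA kept in a few bits via stochastic rounding; the actual SOLO quantizer, scaling scheme, and error analysis are not reproduced here, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lowbit_ema_update(state_q, grad, scale, beta=0.9, bits=3):
    """Update an Adam-style EMA stored as signed low-bit integers
    (true value = state_q * scale). Stochastic rounding keeps the
    quantized state unbiased in expectation."""
    target = beta * (state_q * scale) + (1 - beta) * grad  # full-precision step
    q = target / scale
    lo = np.floor(q)
    q = lo + (rng.random(q.shape) < (q - lo))              # stochastic round
    lim = 2**(bits - 1) - 1
    return np.clip(q, -lim - 1, lim).astype(np.int8)
```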
arXiv Detail & Related papers (2025-05-01T06:47:45Z) - On the Convergence of DP-SGD with Adaptive Clipping [56.24689348875711]
Stochastic gradient descent (SGD) with gradient clipping is a powerful technique for enabling differentially private optimization. This paper provides the first comprehensive convergence analysis of SGD with quantile clipping (QC-SGD). We show that QC-SGD suffers from a bias problem similar to constant-threshold clipped SGD, but that this can be mitigated through a carefully designed quantile and step-size schedule.
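A minimal sketch of the quantile-clipping mechanism (the geometric threshold update and all names below are illustrative assumptions; DP noise addition and the paper's actual schedule are omitted):

```python
import numpy as np

def qc_sgd_step(params, grads, lr, threshold, q=0.9, eta=0.05):
    """One QC-SGD-style step (illustrative): clip per-example gradients to
    a threshold that tracks the q-th quantile of gradient norms online,
    then descend on the average."""
    norms = np.array([np.linalg.norm(g) for g in grads])
    # geometric online update: raise the threshold when too many norms
    # exceed it (the fraction above the q-quantile should be 1 - q)
    threshold *= np.exp(eta * (np.mean(norms > threshold) - (1 - q)))
    clipped = [g * min(1.0, threshold / max(n, 1e-12))
               for g, n in zip(grads, norms)]
    return params - lr * np.mean(clipped, axis=0), threshold
```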
arXiv Detail & Related papers (2024-12-27T20:29:47Z) - On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks [52.97107229149988]
We propose an On-Chip Hardware-Aware Quantization framework, performing hardware-aware mixed-precision quantization on deployed edge devices.
For efficiency metrics, we built an On-Chip Quantization Aware pipeline, which allows the quantization process to perceive the actual hardware efficiency of the quantization operator.
For accuracy metrics, we propose Mask-Guided Quantization Estimation technology to effectively estimate the accuracy impact of operators in the on-chip scenario.
arXiv Detail & Related papers (2023-09-05T04:39:34Z) - Design and execution of quantum circuits using tens of superconducting qubits and thousands of gates for dense Ising optimization problems [12.220619768140903]
We develop a hardware-efficient ansatz for variational optimization, derived from existing ansatze in the literature, that parametrizes subsets of all interactions in the Cost Hamiltonian in each layer.
We report performance significantly better than using a random guess oracle for circuits involving up to approximately 5,000 two-qubit and approximately 5,000 one-qubit native gates.
arXiv Detail & Related papers (2023-08-18T02:36:38Z) - Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
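The guarantee rests on a simple worst-case bound: if the l1 norm of the quantized weights times the maximum input magnitude fits in the accumulator's signed range, the dot product cannot overflow. A hedged sketch of that check (our paraphrase of an accumulator-aware constraint, not the paper's exact formulation):

```python
import numpy as np

def accumulator_is_safe(w_int, input_bits, acc_bits, signed_inputs=True):
    """Sufficient (conservative) overflow check for one dot product:
    |sum_i w[i] * x[i]| <= ||w||_1 * max|x|, so a signed accumulator of
    width acc_bits is safe whenever that bound fits its range."""
    max_x = 2**(input_bits - 1) if signed_inputs else 2**input_bits - 1
    worst = int(np.abs(np.asarray(w_int, dtype=np.int64)).sum()) * max_x
    return worst <= 2**(acc_bits - 1) - 1

# e.g. 64 random int8 weights against int8 inputs
w = np.random.default_rng(0).integers(-128, 128, size=64)
print(accumulator_is_safe(w, input_bits=8, acc_bits=16))  # False: 16b too narrow
print(accumulator_is_safe(w, input_bits=8, acc_bits=32))  # True
```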
arXiv Detail & Related papers (2023-01-31T02:46:57Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
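For context, a minimal AIS estimator with a hand-set geometric bridging schedule (the learned intermediary distributions proposed in the paper would replace the fixed betas below; this is a standard textbook sketch with assumed names):

```python
import numpy as np

rng = np.random.default_rng(0)

def ais_log_z(log_p0, sample_p0, log_target, n_steps=200, n_chains=256,
              step_size=0.5):
    """Estimate log Z of an unnormalized target by annealing from a
    normalized base p0 through geometric bridges
    log_pk(x) = (1 - beta) * log_p0(x) + beta * log_target(x)."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = sample_p0(n_chains)                  # (n_chains, dim) base samples
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += (b1 - b0) * (log_target(x) - log_p0(x))  # weight increment
        # one Metropolis step targeting the current bridge distribution
        log_pk = lambda z: (1 - b1) * log_p0(z) + b1 * log_target(z)
        prop = x + step_size * rng.standard_normal(x.shape)
        accept = np.log(rng.random(n_chains)) < log_pk(prop) - log_pk(x)
        x = np.where(accept[:, None], prop, x)
    m = log_w.max()                          # stable log-mean-exp
    return m + np.log(np.mean(np.exp(log_w - m)))

# demo: base N(0,1), target ∝ exp(-(x - 2)**2 / 2) in 1D, whose true
# log Z is 0.5 * log(2 * pi) ≈ 0.919 (up to Monte Carlo error)
log_p0 = lambda x: (-0.5 * x**2 - 0.5 * np.log(2 * np.pi)).sum(axis=1)
log_t = lambda x: (-0.5 * (x - 2.0)**2).sum(axis=1)
print(ais_log_z(log_p0, lambda n: rng.standard_normal((n, 1)), log_t))
```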
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention [48.697458429460184]
Two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.
This paper proposes a well-designed model named ERNIE-Sparse.
It consists of two distinctive parts: (i) Hierarchical Sparse Transformer (HST) to sequentially unify local and global information, and (ii) Self-Attention Regularization (SAR) to minimize the distance for transformers with different attention topologies.
arXiv Detail & Related papers (2022-03-23T08:47:01Z) - Sampling Strategy Optimization for Randomized Benchmarking [4.7362989868031855]
Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device.
We propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized.
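A sketch of the quantity such an optimization targets: the uncertainty on the fitted fidelity as a function of the chosen sequence lengths and shot allocation (the Gaussian error propagation and names below are illustrative assumptions, not the paper's method):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_fidelity_ci(lengths, survival, shots, d=2):
    """Fit p(m) = a * alpha**m + b and propagate the fit covariance to a
    1-sigma uncertainty on average gate fidelity F = alpha + (1 - alpha)/d.
    The 'RB configuration' being optimized is the choice of sequence
    lengths and the shots spent at each (illustrative shot-noise model)."""
    lengths, survival = np.asarray(lengths), np.asarray(survival)
    sigma = np.sqrt(np.maximum(survival * (1 - survival), 1e-6) / shots)
    popt, pcov = curve_fit(lambda m, a, alpha, b: a * alpha**m + b,
                           lengths, survival, sigma=sigma,
                           p0=(0.5, 0.99, 0.5), absolute_sigma=True)
    alpha, var_alpha = popt[1], pcov[1, 1]
    f = alpha + (1 - alpha) / d
    return f, (1 - 1 / d) * np.sqrt(var_alpha)  # dF/dalpha = 1 - 1/d
```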
arXiv Detail & Related papers (2021-09-16T01:14:13Z)