Gate Set Tomography
- URL: http://arxiv.org/abs/2009.07301v2
- Date: Tue, 28 Sep 2021 18:31:54 GMT
- Title: Gate Set Tomography
- Authors: Erik Nielsen, John King Gamble, Kenneth Rudinger, Travis Scholten,
Kevin Young, Robin Blume-Kohout
- Abstract summary: Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors.
This paper presents the foundations of GST in comprehensive detail.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gate set tomography (GST) is a protocol for detailed, predictive
characterization of logic operations (gates) on quantum computing processors.
Early versions of GST emerged around 2012-13, and since then it has been
refined, demonstrated, and used in a large number of experiments. This paper
presents the foundations of GST in comprehensive detail. The most important
feature of GST, compared to older state and process tomography protocols, is
that it is calibration-free. GST does not rely on pre-calibrated state
preparations and measurements. Instead, it characterizes all the operations in
a gate set simultaneously and self-consistently, relative to each other. Long
sequence GST can estimate gates with very high precision and efficiency,
achieving Heisenberg scaling in regimes of practical interest. In this paper,
we cover GST's intellectual history, the techniques and experiments used to
achieve its intended purpose, data analysis, gauge freedom and fixing, error
bars, and the interpretation of gauge-fixed estimates of gate sets. Our focus
is fundamental mathematical aspects of GST, rather than implementation details,
but we touch on some of the foundational algorithmic tricks used in the pyGSTi
implementation.
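Two of the abstract's key ideas, gauge freedom and Heisenberg scaling from long sequences, can be made concrete with a small numerical sketch. This is a plain NumPy toy model, not the pyGSTi API; the matrices are random illustrative stand-ins, and the rotation example is a simplified caricature of GST's germ-power sequences.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Gauge freedom ---
# In a superoperator (transfer-matrix) picture, a circuit's outcome
# probability is p = E^T @ G_k ... G_1 @ rho.
dim = 4                            # e.g. a single-qubit Pauli transfer matrix
rho = rng.normal(size=(dim, 1))    # state-preparation vector
E = rng.normal(size=(dim, 1))      # measurement-effect vector
G = rng.normal(size=(dim, dim))    # one gate's transfer matrix

M = rng.normal(size=(dim, dim))    # any invertible M defines a gauge transform
Minv = np.linalg.inv(M)
rho_g = M @ rho                    # rho -> M rho
E_g = Minv.T @ E                   # E^T -> E^T M^{-1}
G_g = M @ G @ Minv                 # G -> M G M^{-1}

# Gauge-equivalent gate sets predict identical probabilities for every
# circuit (the M's cancel pairwise), here a three-gate sequence:
p_orig = (E.T @ G @ G @ G @ rho).item()
p_gauge = (E_g.T @ G_g @ G_g @ G_g @ rho_g).item()
assert np.isclose(p_orig, p_gauge)

# --- Heisenberg-like scaling from repeated gates ---
def survival(theta, L):
    """Survival probability after L repetitions of a rotation by theta."""
    return np.cos(L * theta / 2) ** 2

theta, eps = 0.1, 1e-6
# The sensitivity dp/dtheta grows roughly linearly with sequence length L,
# so a fixed probability-estimation error maps to an angle error ~1/L.
sens = [abs(survival(theta + eps, L) - survival(theta, L)) / eps
        for L in (1, 2, 4, 8)]
assert sens[-1] > 4 * sens[0]
```

The cancellation in the first part is exactly why GST can only estimate a gate set up to a gauge, motivating the gauge-fixing step discussed in the paper; the second part shows, in miniature, why long repeated sequences amplify small gate errors into large observable signals.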
Related papers
- OKG-LLM: Aligning Ocean Knowledge Graph with Observation Data via LLMs for Global Sea Surface Temperature Prediction [70.48962924608033]
This work presents the first systematic effort to construct an Ocean Knowledge Graph (OKG) specifically designed to represent diverse ocean knowledge for SST prediction. We develop a graph embedding network to learn the comprehensive semantic and structural knowledge within the OKG, capturing both the unique characteristics of individual sea regions and the complex correlations between them. Finally, we align the learned knowledge with fine-grained numerical SST data and leverage a pre-trained LLM to model SST patterns for accurate prediction.
arXiv Detail & Related papers (2025-07-31T02:06:03Z) - The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm [52.89358421626026]
GPTQ emerged as one of the standard methods for one-shot post-training quantization at LLM scale. We show that GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem.
arXiv Detail & Related papers (2025-07-24T16:22:18Z) - Context-aware gate set tomography: Improving the self-consistent characterization of trapped-ion universal gate sets by leveraging non-Markovianity [49.1574468325115]
Gate set tomography (GST) estimates the complete set of noisy quantum gates, state preparations, and measurements. In its original incarnation, GST improves the estimation precision by applying the gates sequentially. We show that context dependence can be incorporated in the parametrization of the gate set, allowing us to reduce the sampling cost of GST.
arXiv Detail & Related papers (2025-07-03T11:37:36Z) - Quantize What Counts: Bit Allocation Insights Informed by Spectral Gaps in Keys and Values [57.54443445583921]
We provide two novel theorems aimed at enhancing KV quantization methods. Our first theorem, termed Key-Value Norm Disparity, states that the key weight matrices by nature carry richer information. Our second theorem, Key-Driven Quantization, posits that prioritizing the quantization precision of keys over values induces significant improvements to the overall quantization performance.
arXiv Detail & Related papers (2025-02-20T22:24:27Z) - On the Convergence of DP-SGD with Adaptive Clipping [56.24689348875711]
Stochastic Gradient Descent (SGD) with gradient clipping is a powerful technique for enabling differentially private optimization.
This paper provides the first comprehensive convergence analysis of SGD with quantile clipping (QC-SGD).
We show how QC-SGD suffers from a bias problem similar to constant-threshold clipped SGD but can be mitigated through a carefully designed quantile and step size schedule.
arXiv Detail & Related papers (2024-12-27T20:29:47Z) - Microscopic parametrizations for gate set tomography under coloured noise [0.0]
We show that a microscopic parametrization of quantum gates under time-correlated noise on the driving phase reduces the required resources.
We discuss the minimal parametrizations of the gate set that include the effect of finite correlation times and non-Markovian quantum evolutions.
arXiv Detail & Related papers (2024-07-16T09:39:52Z) - Sparse is Enough in Fine-tuning Pre-trained Large Language Models [98.46493578509039]
We propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT).
We validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning.
arXiv Detail & Related papers (2023-12-19T06:06:30Z) - Near-Minimal Gate Set Tomography Experiment Designs [0.0]
We show how to streamline GST experiment designs by removing almost all redundancy.
We do this by analyzing the "germ" subroutines at the heart of GST circuits.
New experiment designs can match the precision of previous GST experiments with significantly fewer circuits.
arXiv Detail & Related papers (2023-08-17T04:46:25Z) - Two-Qubit Gate Set Tomography with Fewer Circuits [0.0]
We show how to exploit the structure of GST circuits to determine which ones are superfluous.
We also explore the impact of these techniques on the prospects of three-qubit GST.
arXiv Detail & Related papers (2023-07-28T18:52:34Z) - Learning Large Graph Property Prediction via Graph Segment Training [61.344814074335304]
We propose a general framework that allows learning large graph property prediction with a constant memory footprint.
We refine the GST paradigm by introducing a historical embedding table to efficiently obtain embeddings for segments not sampled for backpropagation.
Our experiments show that GST-EFD is both memory-efficient and fast, while offering a slight boost on test accuracy over a typical full graph training regime.
arXiv Detail & Related papers (2023-05-21T02:53:25Z) - From Gradient Flow on Population Loss to Learning with Stochastic
Gradient Descent [50.4531316289086]
Stochastic Gradient Descent (SGD) has been the method of choice for learning large-scale non-convex models.
This paper provides general conditions under which SGD converges, assuming that gradient flow (GF) on the population loss converges.
We provide a unified analysis of GD/SGD not only for classical settings like convex losses, but also for more complex problems, including phase retrieval and matrix square root.
arXiv Detail & Related papers (2022-10-13T03:55:04Z) - Efficient characterization of qudit logical gates with gate set tomography using an error-free Virtual-Z-gate model [0.0]
We propose a more efficient GST approach for qudits, utilizing the qudit Hadamard and virtual Z gates to construct fiducials.
Our method reduces the computational costs of estimating characterization results, making GST more practical at scale.
arXiv Detail & Related papers (2022-10-10T17:20:25Z) - Tight Cramér-Rao type bounds for multiparameter quantum metrology
through conic programming [61.98670278625053]
It is paramount to have practical measurement strategies that can estimate incompatible parameters with the best possible precision.
Here, we give a concrete way to find uncorrelated measurement strategies with optimal precisions.
We show numerically that there is a strict gap between the previous efficiently computable bounds and the ultimate precision bound.
arXiv Detail & Related papers (2022-09-12T13:06:48Z) - Learning Structures in Earth Observation Data with Gaussian Processes [67.27044745471207]
This paper reviews the main theoretical GP developments in the field.
New algorithms that respect the signal and noise characteristics, that provide feature rankings automatically, and that allow applicability of associated uncertainty intervals are discussed.
arXiv Detail & Related papers (2020-12-22T10:46:37Z) - Efficient and Stable Graph Scattering Transforms via Pruning [86.76336979318681]
Graph scattering transforms (GSTs) offer training-free deep GCN models that extract features from graph data.
The price paid by GSTs is exponential complexity in space and time that increases with the number of layers.
The present work addresses the complexity limitation of GSTs by introducing an efficient so-termed pruned (p) GST approach.
arXiv Detail & Related papers (2020-01-27T16:05:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.