End-to-End Analysis of Charge Stability Diagrams with Transformers
- URL: http://arxiv.org/abs/2508.15710v1
- Date: Thu, 21 Aug 2025 16:54:22 GMT
- Title: End-to-End Analysis of Charge Stability Diagrams with Transformers
- Authors: Rahul Marchand, Lucas Schorling, Cornelius Carlsson, Jonas Schuff, Barnaby van Straaten, Taylor L. Patti, Federico Fedele, Joshua Ziegler, Parth Girdhar, Pranav Vaidhyanathan, Natalia Ares
- Abstract summary: Transformer models and end-to-end learning frameworks are rapidly revolutionizing the field of artificial intelligence. In this work, we apply object detection transformers to analyze charge stability diagrams in semiconductor quantum dot arrays. We show that it surpasses convolutional neural networks in performance on three different spin qubit architectures.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transformer models and end-to-end learning frameworks are rapidly revolutionizing the field of artificial intelligence. In this work, we apply object detection transformers to analyze charge stability diagrams in semiconductor quantum dot arrays, a key task for achieving scalability with spin-based quantum computing. Specifically, our model identifies triple points and their connectivity, which is crucial for virtual gate calibration, charge state initialization, drift correction, and pulse sequencing. We show that it surpasses convolutional neural networks in performance on three different spin qubit architectures, all without the need for retraining. In contrast to existing approaches, our method significantly reduces complexity and runtime, while enhancing generalizability. The results highlight the potential of transformer-based end-to-end learning frameworks as a foundation for a scalable, device- and architecture-agnostic tool for control and tuning of quantum dot devices.
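The abstract describes a DETR-style object detection transformer applied to charge stability diagrams. As a rough illustration only, the PyTorch sketch below shows one way such a detector could be organized; the class name, layer sizes, query count, and class labels are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of a DETR-style detector
# for triple points in charge stability diagrams. Layer sizes, the query
# count, and the class labels are illustrative assumptions.
import torch
import torch.nn as nn

class TriplePointDetector(nn.Module):
    def __init__(self, d_model=256, num_queries=50, num_classes=2):
        super().__init__()
        # CNN stem: maps a single-channel stability diagram to a feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(d_model, d_model, kernel_size=3, stride=2, padding=1),
        )
        # Encoder-decoder transformer; positional encodings omitted for brevity.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        # Learned object queries, one per candidate triple point.
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        # Heads: class logits (e.g. electron/hole triple point, plus a
        # "no object" class) and normalized (x, y) gate-voltage coordinates.
        self.class_head = nn.Linear(d_model, num_classes + 1)
        self.point_head = nn.Linear(d_model, 2)

    def forward(self, diagrams):
        # diagrams: (B, 1, H, W) charge stability maps
        feats = self.backbone(diagrams)              # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)    # (B, H'*W', C)
        q = self.queries.unsqueeze(0).expand(diagrams.size(0), -1, -1)
        decoded = self.transformer(tokens, q)        # (B, num_queries, C)
        return self.class_head(decoded), self.point_head(decoded).sigmoid()

model = TriplePointDetector()
logits, points = model(torch.randn(2, 1, 128, 128))  # e.g. 128x128 diagrams
```

As in DETR, training such a model would assign the loss via Hungarian matching between predicted and ground-truth points; predicting the connectivity between triple points, which the abstract also highlights, would require an additional relation head that this sketch omits.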
Related papers
- Quantum Decision Transformers (QDT): Synergistic Entanglement and Interference for Offline Reinforcement Learning [0.2538209532048867]
We introduce the Quantum Decision Transformer (QDT), a novel architecture incorporating quantum-inspired computational mechanisms. Our approach integrates two core components: Quantum-Inspired Attention with entanglement operations that capture non-local feature correlations, and Quantum Feedforward Networks with multi-path processing and learnable interference for adaptive computation.
arXiv Detail & Related papers (2025-12-09T16:47:37Z) - Making Neural Networks More Suitable for Approximate Clifford+T Circuit Synthesis [0.7449724123186384]
We develop deep learning techniques that improve performance on reinforcement learning guided quantum circuit synthesis. We show how augmenting data with small random unitary perturbations during training enables more robust learning. We also show how encoding numerical data with techniques from image processing allows networks to better detect small but significant changes in data.
arXiv Detail & Related papers (2025-04-22T15:51:32Z) - A Survey of Quantum Transformers: Architectures, Challenges and Outlooks [82.4736481748099]
Quantum Transformers integrate the representational power of classical Transformers with the computational advantages of quantum computing. Since 2022, research in this area has rapidly expanded, giving rise to diverse technical paradigms and early applications. This paper presents the first comprehensive, systematic, and in-depth survey of quantum Transformer models.
arXiv Detail & Related papers (2025-04-04T05:40:18Z) - BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs). We propose BHViT, a binarization-friendly hybrid ViT architecture, and its fully binarized model, guided by three important observations. Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z) - UDiTQC: U-Net-Style Diffusion Transformer for Quantum Circuit Synthesis [13.380226276791818]
Current diffusion model approaches based on U-Net architectures, while promising, encounter challenges related to computational efficiency and modeling global context. We propose UDiT, a novel U-Net-style Diffusion Transformer architecture, which combines U-Net's strengths in multi-scale feature extraction with the Transformer's ability to model global context.
arXiv Detail & Related papers (2025-01-24T15:15:50Z) - Regression and Classification with Single-Qubit Quantum Neural Networks [0.0]
We use a resource-efficient and scalable Single-Qubit Quantum Neural Network (SQQNN) for both regression and classification tasks. For classification, we introduce a novel training method inspired by the Taylor series, which can efficiently find a global minimum in a single step. The SQQNN exhibits virtually error-free and strong performance in regression and classification tasks, including the MNIST dataset.
arXiv Detail & Related papers (2024-12-12T17:35:36Z) - Learning with SASQuaTCh: a Novel Variational Quantum Transformer Architecture with Kernel-Based Self-Attention [0.464982780843177]
We present a variational quantum circuit architecture named Self-Attention Sequential Quantum Transformer Channel (SASQuaTCh). Our approach leverages recent insights from kernel-based operator learning in the context of vision transformer networks, using simple gate operations and a set of multi-dimensional quantum Fourier transforms. To validate our approach, we consider image classification tasks in simulation and with hardware, where with only 9 qubits and a handful of parameters we are able to simultaneously embed and classify a grayscale image of handwritten digits with high accuracy.
arXiv Detail & Related papers (2024-03-21T18:00:04Z) - Towards Fault-Tolerant Quantum Deep Learning: Designing and Analyzing Quantum ResNet and Transformer with Quantum Arithmetic and Linear Algebra Primitives [5.89456905230997]
We introduce a framework to overcome the dual challenges of constructing deep architectures and the prohibitive overhead of data loading and measurement. Our framework enables the design of multi-layer Quantum ResNet and Quantum Transformer models. A cornerstone of our approach is a novel data transfer protocol, Discrete Chebyshev Decomposition (DCD), which facilitates this modularity.
arXiv Detail & Related papers (2024-02-29T08:12:02Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z) - Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z) - Transformers Solve the Limited Receptive Field for Monocular Depth Prediction [82.90445525977904]
We propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)