Efficient Image Reconstruction Architecture for Neutral Atom Quantum Computing
- URL: http://arxiv.org/abs/2603.03149v1
- Date: Tue, 03 Mar 2026 16:40:24 GMT
- Title: Efficient Image Reconstruction Architecture for Neutral Atom Quantum Computing
- Authors: Jonas Winklmann, Yian Yu, Xiaorang Guo, Korbinian Staudacher, Martin Schulz
- Abstract summary: Neutral atom quantum computers (NAQCs) have attracted a lot of attention, primarily due to their long coherence times and good scalability. One of their main drawbacks is their comparatively time-consuming control overhead. We propose a highly-parallel atom-detection accelerator for tweezer-based NAQCs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, neutral atom quantum computers (NAQCs) have attracted a lot of attention, primarily due to their long coherence times and good scalability. One of their main drawbacks is their comparatively time-consuming control overhead, with one of the main contributing procedures being the detection of individual atoms and measurement of their states, each occurring at least once per compute cycle and requiring fluorescence imaging and subsequent image analysis. To reduce the required time budget, we propose a highly-parallel atom-detection accelerator for tweezer-based NAQCs. Building on an existing solution, our design combines algorithm-level optimization with a field-programmable gate array (FPGA) implementation to maximize parallelism and reduce the run time of the image analysis process. Our design can analyze a 256$\times$256-pixel image representing a 10$\times$10 atom array in just 115 $μ$s on a Xilinx UltraScale+ FPGA. Compared to the original CPU baseline and our optimized CPU version, we achieve about 34.9$\times$ and 6.3$\times$ speedup of the reconstruction time, respectively. Moreover, this work also contributes to the ongoing efforts toward fully integrated FPGA-based control systems for NAQCs.
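The detection step the paper accelerates amounts to deciding, for each tweezer site in the fluorescence image, whether an atom is present. A minimal software sketch of threshold-based occupancy detection is shown below; the grid size, ROI layout, and threshold are placeholder assumptions, and the paper's FPGA design performs a far more elaborate, parallelized reconstruction.

```python
def detect_atoms(image, grid=10, threshold=50.0):
    """Classify each tweezer site as occupied or empty by summing the
    fluorescence counts inside its region of interest (ROI).

    Simplified illustration only: the ROI layout, grid size, and
    threshold are placeholder assumptions, not the paper's parameters.
    """
    h, w = len(image), len(image[0])
    sy, sx = h // grid, w // grid              # ROI size per site
    occupancy = [[False] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            total = sum(image[r][c]
                        for r in range(i * sy, (i + 1) * sy)
                        for c in range(j * sx, (j + 1) * sx))
            occupancy[i][j] = total > threshold
    return occupancy

# Synthetic 256x256 frame with bright spots at two of the 10x10 sites.
frame = [[0.0] * 256 for _ in range(256)]
for r in range(10, 15):
    for c in range(10, 15):
        frame[r][c] = 10.0        # atom at site (0, 0)
for r in range(130, 135):
    for c in range(60, 65):
        frame[r][c] = 10.0        # atom at site (5, 2)
occ = detect_atoms(frame)
print(sum(map(sum, occ)))         # number of occupied sites
```

The per-site ROI sums are independent, which is what makes this workload amenable to the massive parallelism of an FPGA implementation.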
Related papers
- Bridging Superconducting and Neutral-Atom Platforms for Efficient Fault-Tolerant Quantum Architectures [14.971894680142343]
We propose a strategic approach to Heterogeneous Quantum Architectures (HQA) that synthesizes the advantages of the superconducting (SC) and neutral atom (NA) platforms. Our designs achieve $752\times$ speedup over NA-only baselines on average and reduce the physical qubit footprint by over $10\times$ compared to SC-only systems.
arXiv Detail & Related papers (2026-01-15T07:39:05Z)
- Resource Analysis of Low-Overhead Transversal Architectures for Reconfigurable Atom Arrays [38.6948808036416]
We present a low-overhead architecture that supports the layout and resource estimation of large-scale fault-tolerant quantum algorithms. We find that a 2048-bit RSA factoring can be executed with 19 million qubits in 5.6 days, for 1 ms QEC cycle times.
arXiv Detail & Related papers (2025-05-21T18:00:18Z)
- HQViT: Hybrid Quantum Vision Transformer for Image Classification [48.72766405978677]
We propose a Hybrid Quantum Vision Transformer (HQViT) to accelerate model training while enhancing model performance. HQViT introduces whole-image processing with amplitude encoding to better preserve global image information without additional positional encoding. Experiments across various computer vision datasets demonstrate that HQViT outperforms existing models, achieving a maximum improvement of up to $10.9\%$ (on the MNIST 10-classification task) over the state of the art.
arXiv Detail & Related papers (2025-04-03T16:13:34Z)
- Design of an FPGA-Based Neutral Atom Rearrangement Accelerator for Quantum Computing [1.003635085077511]
Neutral atoms have emerged as a promising technology for implementing quantum computers.
We propose a novel quadrant-based rearrangement algorithm that employs a divide-and-conquer strategy and also enables the simultaneous movement of multiple atoms.
This is the first hardware acceleration work for atom rearrangement, and it significantly reduces the processing time.
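The quadrant-based divide-and-conquer idea starts by partitioning the occupancy grid into four quadrants and comparing their atom counts, so that surplus quadrants can feed deficit quadrants before each is solved recursively. A toy sketch of that first counting step follows; it is illustrative only, as the cited accelerator's actual algorithm and data layout are FPGA-specific.

```python
def quadrant_counts(occupancy):
    """Split a square occupancy grid into four quadrants and report how
    many atoms each contains: the first step of a divide-and-conquer
    rearrangement, where surplus quadrants would supply deficit ones
    before recursing.  Illustrative sketch, not the paper's algorithm.
    """
    n = len(occupancy)
    h = n // 2
    count = lambda rows, cols: sum(occupancy[r][c] for r in rows for c in cols)
    return {
        "top_left": count(range(h), range(h)),
        "top_right": count(range(h), range(h, n)),
        "bottom_left": count(range(h, n), range(h)),
        "bottom_right": count(range(h, n), range(h, n)),
    }

# Made-up 4x4 occupancy grid (1 = atom present).
grid = [[1, 0, 0, 1],
        [0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 0, 0, 1]]
counts = quadrant_counts(grid)
print(counts)
```

Because the four quadrant counts are computed independently, the same decomposition also exposes the parallelism that a hardware implementation can exploit.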
arXiv Detail & Related papers (2024-11-19T10:38:21Z)
- HAPM -- Hardware Aware Pruning Method for CNN hardware accelerators in resource constrained devices [44.99833362998488]
The present work proposes a generic hardware architecture ready to be implemented on FPGA devices.
The inference speed of the design is evaluated over different resource constrained FPGA devices.
We demonstrate that our hardware-aware pruning algorithm achieves a remarkable improvement of a 45 % in inference time compared to a network pruned using the standard algorithm.
arXiv Detail & Related papers (2024-08-26T07:27:12Z)
- Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers [56.37495946212932]
Vision transformers (ViTs) have demonstrated their superior accuracy for computer vision tasks compared to convolutional neural networks (CNNs).
This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs.
arXiv Detail & Related papers (2024-07-25T16:35:46Z)
- TeMPO: Efficient Time-Multiplexed Dynamic Photonic Tensor Core for Edge AI with Compact Slow-Light Electro-Optic Modulator [44.74560543672329]
We present a time-multiplexed dynamic photonic tensor accelerator, dubbed TeMPO, with cross-layer device/circuit/architecture customization.
We achieve a 368.6 TOPS peak performance, 22.3 TOPS/W energy efficiency, and 1.2 TOPS/mm$^2$ compute density.
This work signifies the power of cross-layer co-design and domain-specific customization, paving the way for future electronic-photonic accelerators.
arXiv Detail & Related papers (2024-02-12T03:40:32Z)
- Many-body computing on Field Programmable Gate Arrays [5.3808713424582395]
We leverage the capabilities of Field Programmable Gate Arrays (FPGAs) for conducting quantum many-body calculations. This has resulted in a tenfold speedup compared to CPU-based computation for a Monte Carlo algorithm. This is also the first use of an FPGA to accelerate a typical tensor network algorithm for many-body ground-state calculations.
arXiv Detail & Related papers (2024-02-09T14:01:02Z)
- A Cost-Efficient FPGA Implementation of Tiny Transformer Model using Neural ODE [0.8403582577557918]
Transformer has been adopted to image recognition tasks and shown to outperform CNNs and RNNs while it suffers from high training cost and computational complexity.
We propose a lightweight hybrid model which uses Neural ODE as a backbone instead of ResNet.
The proposed model is deployed on a modest-sized FPGA device for edge computing.
arXiv Detail & Related papers (2024-01-05T09:32:39Z)
- Efficient algorithms to solve atom reconfiguration problems. II. The assignment-rerouting-ordering (aro) algorithm [35.300779480388705]
Atom reconfiguration problems require moving individual atoms into a target configuration quickly and efficiently. A typical approach is to use an assignment algorithm to determine which atoms to move to which traps. This approach does not optimize for the number of displaced atoms or the number of times each atom is displaced. We propose the assignment-rerouting-ordering (aro) algorithm to improve the performance of assignment-based algorithms in solving atom reconfiguration problems.
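The assignment step these papers build on maps source atoms to target traps so as to keep total displacement small. The toy greedy heuristic below only illustrates the basic idea; the aro algorithm uses an exact assignment step plus rerouting and ordering subroutines, and the coordinates here are made up.

```python
def greedy_assign(atoms, targets):
    """Toy assignment of source atoms to target traps: each target, in
    order, grabs the nearest still-unassigned atom by Manhattan
    distance.  Illustrative heuristic only, not the aro algorithm's
    exact assignment subroutine.
    """
    remaining = list(atoms)
    moves = []
    for t in targets:
        # nearest still-unassigned atom (Manhattan distance)
        a = min(remaining,
                key=lambda p: abs(p[0] - t[0]) + abs(p[1] - t[1]))
        remaining.remove(a)
        moves.append((a, t))
    return moves

# Hypothetical 2D trap coordinates.
moves = greedy_assign(atoms=[(0, 0), (3, 4), (9, 9)],
                      targets=[(1, 1), (2, 2)])
total = sum(abs(a[0] - t[0]) + abs(a[1] - t[1]) for a, t in moves)
print(moves, total)
```

A greedy pairing like this is cheap but can be far from optimal, which is precisely why exact assignment algorithms, followed by rerouting and ordering passes, are worth the extra effort.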
arXiv Detail & Related papers (2022-12-11T19:48:25Z)
- Efficient algorithms to solve atom reconfiguration problems. I. The redistribution-reconfiguration (red-rec) algorithm [35.300779480388705]
Red-rec exploits simple and exact subroutines to solve atom reconfiguration problems on grids. Red-rec enables assembling large configurations of atoms with high mean success probability.
arXiv Detail & Related papers (2022-12-07T19:00:01Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.