Enhanced Maximum Independent Set Preparation with Rydberg Atoms Guided by the Spectral Gap
- URL: http://arxiv.org/abs/2602.17991v1
- Date: Fri, 20 Feb 2026 04:58:12 GMT
- Title: Enhanced Maximum Independent Set Preparation with Rydberg Atoms Guided by the Spectral Gap
- Authors: Seokho Jeong, Minhyuk Kim
- Abstract summary: We introduce a spectral-gap-guided schedule engineering method that modifies the laser detuning profile to suppress leakage. We experimentally benchmark ADGLB on a quasi-one-dimensional chain of $N=10$ atoms. We show that the schedule optimized for smaller instances can be directly applied to larger two-dimensional triangular lattices with $N=25$ and $N=37$.
- Score: 4.082216579462797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adiabatic quantum computation with Rydberg atoms provides a natural route for solving combinatorial optimization problems such as the maximum independent set (MIS). However, its performance is fundamentally limited by the reduction of the spectral gap with increasing system size and connectivity, which induces population leakage from the ground state during finite-time evolution. Here we introduce the Adjusted Detuning for Ground-Energy Leakage Blockade (ADGLB), a spectral-gap-guided schedule engineering method that modifies the laser detuning profile to suppress leakage without introducing additional Hamiltonian terms or iterative optimization loops. We experimentally benchmark ADGLB on a quasi-one-dimensional chain of $N=10$ atoms, and the MIS preparation probability increases substantially compared with the standard adiabatic schedule. Furthermore, we show that the schedule optimized for smaller instances can be directly applied to larger two-dimensional triangular lattices with $N=25$ and $N=37$. With a small heuristic offset, the method also remains effective for instances with higher hardness parameters. These findings demonstrate that spectral-gap-guided schedule engineering offers a scalable and hardware-efficient strategy for enhancing adiabatic quantum optimization on neutral-atom platforms.
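For intuition, the sketch below mimics the gap-guided idea numerically: it diagonalizes a toy blockaded-chain Hamiltonian along a linear detuning sweep, locates the minimum spectral gap, and slows the sweep there. This is not the authors' ADGLB implementation; the Hamiltonian, all parameter values, and the reshaping rule (local sweep rate proportional to the squared gap, a standard local-adiabatic heuristic) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of spectral-gap-guided schedule reshaping (NOT the
# authors' ADGLB code): diagonalize a small blockaded-chain Hamiltonian
# along a linear detuning sweep, locate the minimum gap, and slow the
# sweep there. All parameter values are illustrative.
N, U = 3, 10.0                 # atoms in a chain; nearest-neighbour blockade
dim = 2 ** N

def hamiltonian(omega, delta):
    """H = -(omega/2) sum_i X_i - delta sum_i n_i + U sum_<ij> n_i n_j."""
    H = np.zeros((dim, dim))
    for state in range(dim):
        bits = [(state >> i) & 1 for i in range(N)]
        H[state, state] = -delta * sum(bits) + U * sum(
            bits[i] * bits[i + 1] for i in range(N - 1))
        for i in range(N):                    # transverse drive flips one atom
            H[state, state ^ (1 << i)] += -0.5 * omega
    return H

def gap(s):
    """Instantaneous gap along a linear reference sweep delta: -4 -> +4."""
    evals = np.linalg.eigvalsh(hamiltonian(omega=1.0, delta=-4.0 + 8.0 * s))
    return evals[1] - evals[0]

s = np.linspace(0.0, 1.0, 201)
gaps = np.array([gap(x) for x in s])

# Reshape: local sweep rate ds/dt ~ gap(s)^2, so the detuning ramp
# lingers where the gap closes and hurries where it is wide.
dt = 1.0 / np.maximum(gaps, 1e-9) ** 2
t = np.cumsum(dt) / np.sum(dt)                # normalized schedule time
delta_of_t = np.interp(np.linspace(0, 1, 201), t, -4.0 + 8.0 * s)
print(f"minimum gap {gaps.min():.3f} at s = {s[np.argmin(gaps)]:.2f}")
```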
Related papers
- Scaling QAOA: transferring optimal adiabatic schedules from small-scale to large-scale variational circuits [0.0]
We propose a schedule-learning framework that transfers spectral-gap-informed adiabatic control strategies from small-scale instances to larger systems. Our results suggest that gap-informed schedule transfers provide a scalable and parameter-efficient strategy for QAOA.
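As a concrete illustration of schedule transfer (a generic re-sampling trick assumed here, not necessarily the paper's procedure), the sketch below stretches angles optimized at depth 4 onto a depth-12 circuit; the angle values are invented.

```python
import numpy as np

# Hedged sketch of small-to-large schedule transfer: angles optimized
# at small depth are treated as samples of a smooth annealing schedule
# and re-sampled at a larger depth. The p=4 angles below are made up.
def transfer_schedule(angles_small, p_large):
    p_small = len(angles_small)
    x_small = (np.arange(p_small) + 0.5) / p_small   # normalized layer index
    x_large = (np.arange(p_large) + 0.5) / p_large
    return np.interp(x_large, x_small, angles_small)

gammas_p4 = np.array([0.18, 0.35, 0.52, 0.61])        # hypothetical optimum, p=4
gammas_p12 = transfer_schedule(gammas_p4, p_large=12) # warm start for p=12
print(np.round(gammas_p12, 3))
```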
arXiv Detail & Related papers (2026-02-16T18:12:13Z)
- Controlled LLM Training on Spectral Sphere [76.60985966206746]
We introduce the Spectral Sphere algorithm (SSO), which enforces strict module-wise spectral constraints on both weights and their updates. We observe significant practical stability benefits, including improved MoE router load balancing, suppressed outliers, and strictly bounded activations.
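A rough stand-in for module-wise spectral control, sketched with a plain SVD projection; the actual SSO update rule is not reproduced here, and the radius and matrix sizes are arbitrary.

```python
import numpy as np

# Illustrative spectral projection: clip each weight matrix's singular
# values to a target radius so the module stays inside a spectral ball.
def project_spectral(W, radius=1.0):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, radius)) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)) * 0.5
W = project_spectral(W - 0.01 * rng.normal(size=W.shape))  # update, then project
print("spectral norm:", np.linalg.svd(W, compute_uv=False)[0])
```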
arXiv Detail & Related papers (2026-01-13T09:59:47Z)
- Quantum Approximate Optimization Algorithm with Fixed Number of Parameters [0.0]
We introduce a novel quantum optimization paradigm: the Fixed-Parameter-Count Quantum Approximate Optimization Algorithm (FPC-QAOA). It is a scalable variational framework that maintains a constant number of trainable parameters regardless of the number of qubits, Hamiltonian complexity, or circuit depth. We benchmark FPC-QAOA on random MaxCut instances and the Tail Assignment Problem, achieving performance comparable to or better than standard QAOA.
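The fixed-count idea can be sketched with a Fourier-style parameterization (a known reparameterization trick, assumed here for illustration; the paper's construction may differ): a handful of coefficients generates a full angle schedule at any depth.

```python
import numpy as np

# Six trainable coefficients generate depth-p QAOA angles for any p,
# so the parameter count never grows with circuit depth. Basis choice
# and coefficient values are illustrative.
def angles_from_coeffs(u, v, p):
    i = np.arange(1, p + 1)
    gammas = sum(u[k] * np.sin((k + 0.5) * np.pi * (i - 0.5) / p) for k in range(len(u)))
    betas  = sum(v[k] * np.cos((k + 0.5) * np.pi * (i - 0.5) / p) for k in range(len(v)))
    return gammas, betas

u, v = np.array([0.6, 0.1, -0.05]), np.array([0.5, -0.08, 0.02])
for p in (4, 16, 64):                       # same 6 parameters at every depth
    gammas, _ = angles_from_coeffs(u, v, p)
    print(p, np.round(gammas[:3], 3))
```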
arXiv Detail & Related papers (2025-12-24T14:02:31Z)
- Quantum State Preparation via Schmidt Spectrum Optimisation [0.0]
We introduce an efficient algorithm for the systematic design of shallow-depth quantum circuits. The proposed method leverages Schmidt spectrum optimization (SSO) to minimize circuit depth. We demonstrate state-of-the-art shallow-depth performance, improving accuracy by up to an order of magnitude over existing methods.
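For intuition, the Schmidt spectrum is just the singular-value spectrum of the reshaped state vector; the toy below (random target, arbitrary cut) computes the quantity an SSO-style method would shape, plus the fidelity left after truncation.

```python
import numpy as np

# Schmidt values across a 4|4-qubit cut of a random 8-qubit state.
rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** 8) + 1j * rng.normal(size=2 ** 8)
psi /= np.linalg.norm(psi)

schmidt = np.linalg.svd(psi.reshape(2 ** 4, 2 ** 4), compute_uv=False)
chi = 4                                   # keep only the leading Schmidt values
fidelity = np.sum(schmidt[:chi] ** 2)     # weight captured by the truncation
print("leading Schmidt values:", np.round(schmidt[:6], 3))
print("rank-%d truncation fidelity: %.4f" % (chi, fidelity))
```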
arXiv Detail & Related papers (2025-12-23T17:27:32Z)
- Lighter-X: An Efficient and Plug-and-play Strategy for Graph-based Recommendation through Decoupled Propagation [49.865020394064096]
We propose Lighter-X, an efficient and modular framework that can be seamlessly integrated with existing GNN-based recommender architectures. Our approach substantially reduces both parameter size and computational complexity while preserving the theoretical guarantees and empirical performance of the base models. Experiments demonstrate that Lighter-X achieves comparable performance to baseline models with significantly fewer parameters.
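A minimal sketch of the decoupled-propagation pattern (function names invented): graph smoothing is precomputed once offline, so training touches only static features.

```python
import numpy as np

# Precompute k-step normalized propagation A_hat^k X offline; any
# lightweight scorer can then be fit on the cached features Z.
def precompute_propagation(A, X, k=2):
    deg = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric normalization
    for _ in range(k):
        X = A_hat @ X
    return X

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy graph
Z = precompute_propagation(A, np.eye(3), k=2)
print(Z)    # no propagation is needed during training itself
```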
arXiv Detail & Related papers (2025-10-11T08:33:08Z)
- PT$^2$-LLM: Post-Training Ternarization for Large Language Models [52.4629647715623]
Large Language Models (LLMs) have shown impressive capabilities across diverse tasks, but their large memory and compute demands hinder deployment. We propose PT$^2$-LLM, a post-training ternarization framework tailored for LLMs. At its core is an Asymmetric Ternary Quantizer equipped with a two-stage refinement pipeline.
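A toy asymmetric ternary quantizer, with per-row thresholds and separate positive/negative scales; the paper's two-stage refinement pipeline is not reproduced, and the threshold rule is an assumption.

```python
import numpy as np

# Map each row to {-s_neg, 0, +s_pos}: asymmetry lets the positive and
# negative levels take different magnitudes.
def ternarize_row(w, frac=0.7):
    t = frac * np.abs(w).mean()               # sparsity threshold (heuristic)
    pos, neg = w > t, w < -t
    s_pos = w[pos].mean() if pos.any() else 0.0
    s_neg = -w[neg].mean() if neg.any() else 0.0
    return pos * s_pos - neg * s_neg          # dequantized ternary row

W = np.random.default_rng(2).normal(0.1, 1.0, size=(4, 16))
W_t = np.vstack([ternarize_row(row) for row in W])
print("reconstruction MSE:", np.mean((W - W_t) ** 2))
```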
arXiv Detail & Related papers (2025-09-27T03:01:48Z)
- MPQ-DMv2: Flexible Residual Mixed Precision Quantization for Low-Bit Diffusion Models with Temporal Distillation [74.34220141721231]
We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models.
arXiv Detail & Related papers (2025-07-06T08:16:50Z)
- A quantum wire approach to weighted combinatorial graph optimisation problems [0.0]
We present and experimentally demonstrate an efficient encoding scheme based on chains of Rydberg-blockaded atoms. We embed maximum weighted independent set (MWIS) and quadratic unconstrained binary optimization (QUBO) problems on a neutral atom architecture.
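The classical reduction such encodings target is compact; below is the textbook MWIS-to-QUBO construction (the quantum-wire embedding itself is hardware-level and not shown).

```python
import numpy as np

# Minimize x^T Q x over x in {0,1}^n: rewards -w_i on the diagonal,
# penalty P on every edge so no two adjacent vertices are selected.
def mwis_qubo(weights, edges, penalty=None):
    P = penalty if penalty is not None else 2.0 * max(weights)
    Q = -np.diag(np.asarray(weights, dtype=float))
    for i, j in edges:
        Q[i, j] += P / 2.0
        Q[j, i] += P / 2.0
    return Q

Q = mwis_qubo([1.0, 2.0, 1.5], edges=[(0, 1), (1, 2)])
x = np.array([1, 0, 1])          # {0, 2} is an independent set
print("objective:", x @ Q @ x)   # -2.5 = -(w_0 + w_2)
```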
arXiv Detail & Related papers (2025-03-21T13:00:51Z)
- RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [53.571195477043496]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE). RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers. Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
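A toy of the two ingredients together, straight-through gradients plus a rotation; here a fixed random orthogonal matrix stands in for the paper's optimized rotation configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))    # orthogonal "rotation"

def quantize(w, bits=4):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

W = rng.normal(size=(8, 8))
W_rot = R @ W                                   # rotate to tame outliers
grad_q = rng.normal(size=(8, 8))                # upstream grad at quantize(W_rot)
grad_W = R.T @ grad_q                           # STE: treat d quantize/dw as 1
print("quantization MSE:", np.mean((quantize(W_rot) - W_rot) ** 2))
```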
arXiv Detail & Related papers (2025-02-13T06:44:33Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for optimizing the ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
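Sketched generically below: a coordinate-block loop stands in for PECT's pruning-based parameter selection, but it shows the same "sequence of variational algorithms" structure, each run touching only an active subset.

```python
import numpy as np
from scipy.optimize import minimize

def energy(theta):                               # stand-in variational cost
    return np.sum(np.sin(theta) ** 2) + 0.1 * np.sum(theta ** 2)

theta = np.random.default_rng(4).normal(size=12)
for block in np.array_split(np.arange(12), 4):   # sequence of small optimizations
    def sub(x, block=block):
        t = theta.copy()
        t[block] = x
        return energy(t)
    theta[block] = minimize(sub, theta[block], method="COBYLA").x
print("final cost:", round(energy(theta), 4))
```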
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster in finite time, up to an error neighborhood.
Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
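A toy 1-D quadratic makes the trade-off visible: the step size stays constant while the batch grows, so gradient variance shrinks instead of the learning rate; the doubling rule is illustrative, not the paper's schedule.

```python
import numpy as np

rng = np.random.default_rng(5)
w, step, batch = 5.0, 0.3, 2
for it in range(40):
    grads = (w - 1.0) + rng.normal(0.0, 1.0, size=batch)  # noisy grad of (w-1)^2/2
    w -= step * grads.mean()
    if it % 10 == 9:
        batch *= 2        # halve the variance; keep the fast constant-step rate
print("w ~", round(w, 3), "| final batch:", batch)
```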
arXiv Detail & Related papers (2020-07-02T16:02:02Z)