atommovr: An open-source simulation framework for rearrangement in atomic arrays
- URL: http://arxiv.org/abs/2508.02670v1
- Date: Mon, 04 Aug 2025 17:59:47 GMT
- Title: atommovr: An open-source simulation framework for rearrangement in atomic arrays
- Authors: Nikhil K Harle, Bo-Yu Chen, Bob Bao, Hannes Bernien
- Abstract summary: Atom rearrangement is a fundamental building block for the development of neutral atom-based quantum processors. We develop an open-source simulation framework for developing, comparing, and benchmarking rearrangement algorithms. We also develop a naive dual-species algorithm able to prepare arbitrary targets with near-unity success rate.
- Score: 5.442106161233214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of atom rearrangement has emerged in the last decade as a fundamental building block for the development of neutral atom-based quantum processors. However, despite many recent efforts to develop algorithms with favorable asymptotic scaling, no time-optimal algorithm has been developed for any rearrangement task. Moreover, no open-source code exists to reproduce or benchmark existing algorithms, or to assist the development of new rearrangement protocols. To address this deficiency, we develop an open-source simulation framework for developing, comparing, and benchmarking algorithms under realistic and customizable noise models. Using this framework, we (a) numerically extract lower bounds for the scaling of a time-optimal rearrangement algorithm and compare them to existing heuristic algorithms, and (b) develop a naive dual-species algorithm able to prepare arbitrary targets with near-unity success rate. With this framework, we hope to develop a common tool for the community to study rearrangement, lower the barrier to entry for new experimental groups, and stimulate progress in developing algorithms which approach time-optimal scaling.
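To make concrete the kind of study such a framework enables, below is a minimal, self-contained sketch of a single-species rearrangement benchmark. It is not the atommovr API: the function name, the centered-square target pattern, and the single per-move loss probability are assumptions made purely for illustration. Atoms in a stochastically loaded array are matched to target sites with a Hungarian (minimum-total-distance) assignment, and the success rate is estimated under the toy loss model.

import numpy as np
from scipy.optimize import linear_sum_assignment

def toy_rearrangement_success(n=10, fill=0.6, target_size=6,
                              p_loss_per_move=0.01, trials=500, seed=0):
    """Estimate the probability of filling a centered target_size x target_size
    block of an n x n array, assigning atoms to target sites with a
    minimum-total-distance (Hungarian) matching and a per-move loss model."""
    rng = np.random.default_rng(seed)
    lo = (n - target_size) // 2
    targets = np.array([(r, c) for r in range(lo, lo + target_size)
                               for c in range(lo, lo + target_size)])
    successes = 0
    for _ in range(trials):
        # Stochastic loading: each trap holds an atom with probability `fill`.
        loaded = np.argwhere(rng.random((n, n)) < fill)
        if len(loaded) < len(targets):
            continue  # not enough atoms; this trial counts as a failure
        # Cost matrix: Manhattan distance from every atom to every target site.
        cost = np.abs(loaded[:, None, :] - targets[None, :, :]).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)
        moves = cost[rows, cols]  # elementary moves required per assigned atom
        # Each elementary move loses the atom with probability p_loss_per_move.
        survive_prob = (1.0 - p_loss_per_move) ** moves
        if np.all(rng.random(len(moves)) < survive_prob):
            successes += 1
    return successes / trials

if __name__ == "__main__":
    print(f"estimated success rate: {toy_rearrangement_success():.3f}")

A full comparison in the framework would run several such algorithms under its customizable noise models; the sketch above only illustrates the benchmarking loop (repeated trials, assignment, noise, success statistics) in its simplest form.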
Related papers
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z)
- Hybrid ACO-CI Algorithm for Beam Design problems [0.4397520291340694]
A novel hybrid version of the Ant colony optimization (ACO) method is developed using the sample space reduction technique of the Cohort Intelligence (CI) algorithm.
The proposed approach could be investigated for real-world applications encompassing engineering and health care problems.
arXiv Detail & Related papers (2023-03-29T04:37:14Z)
- Runtime Analysis of Competitive co-Evolutionary Algorithms for Maximin Optimisation of a Bilinear Function [1.3053649021965603]
Co-evolutionary algorithms have a wide range of applications, such as in hardware design, evolution of strategies for board games, and patching software bugs.
It is an open challenge to develop a theory that can predict when co-evolutionary algorithms find solutions efficiently and reliably.
This paper provides a first step in developing runtime analysis for population-based competitive co-evolutionary algorithms.
arXiv Detail & Related papers (2022-06-30T12:35:36Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, so they are incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Community detection using fast low-cardinality semidefinite programming [94.4878715085334]
We propose a new low-cardinality algorithm that generalizes the local update to maximize a semidefinite relaxation derived from Leiden-k-cut.
The proposed algorithm is scalable, outperforms state-of-the-art algorithms, and does so in real-world running time with little additional cost.
arXiv Detail & Related papers (2020-12-04T15:46:30Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Automatic Generation of Algorithms for Black-Box Robust Optimisation Problems [0.0]
We develop algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited.
We employ an automatic generation of algorithms approach: Grammar-Guided Genetic Programming.
Our algorithmic building blocks combine elements of existing techniques and new features, resulting in the investigation of a novel solution space.
arXiv Detail & Related papers (2020-04-15T18:51:33Z)