Multipoint-BAX: A New Approach for Efficiently Tuning Particle
Accelerator Emittance via Virtual Objectives
- URL: http://arxiv.org/abs/2209.04587v5
- Date: Tue, 19 Dec 2023 23:40:27 GMT
- Title: Multipoint-BAX: A New Approach for Efficiently Tuning Particle
Accelerator Emittance via Virtual Objectives
- Authors: Sara A. Miskovich, Willie Neiswanger, William Colocho, Claudio Emma,
Jacqueline Garrahan, Timothy Maxwell, Christopher Mayes, Stefano Ermon,
Auralee Edelen, Daniel Ratner
- Abstract summary: We propose a new information-theoretic algorithm, Multipoint-BAX, for black-box optimization on multipoint queries.
We use Multipoint-BAX to minimize emittance at the Linac Coherent Light Source (LCLS) and the Facility for Advanced Accelerator Experimental Tests II (FACET-II)
- Score: 47.52324722637079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although beam emittance is critical for the performance of high-brightness
accelerators, optimization is often time limited as emittance calculations,
commonly done via quadrupole scans, are typically slow. Such calculations are a
type of $\textit{multipoint query}$, i.e. each query requires multiple
secondary measurements. Traditional black-box optimizers such as Bayesian
optimization are slow and inefficient when dealing with such objectives as they
must acquire the full series of measurements, but return only the emittance,
with each query. We propose a new information-theoretic algorithm,
Multipoint-BAX, for black-box optimization on multipoint queries, which queries
and models individual beam-size measurements using techniques from Bayesian
Algorithm Execution (BAX). Our method avoids the slow multipoint query on the
accelerator by acquiring points through a $\textit{virtual objective}$, i.e.
calculating the emittance objective from a fast learned model rather than
directly from the accelerator. We use Multipoint-BAX to minimize emittance at
the Linac Coherent Light Source (LCLS) and the Facility for Advanced
Accelerator Experimental Tests II (FACET-II). In simulation, our method is
20$\times$ faster and more robust to noise compared to existing methods. In
live tests, it matched the hand-tuned emittance at FACET-II and achieved a 24%
lower emittance than hand-tuning at LCLS. Our method represents a conceptual
shift for optimizing multipoint queries, and we anticipate that it can be
readily adapted to similar problems in particle accelerators and other
scientific instruments.
Related papers
- EXAQ: Exponent Aware Quantization For LLMs Acceleration [15.610222058802005]
We propose an analytical approach to determine the optimal clipping value for the input to the softmax function.
This method accelerates the calculations of both $e^x$ and $\sum e^x$ with minimal to no accuracy degradation.
This ultra-low bit quantization allows, for the first time, an acceleration of approximately 4x in the accumulation phase.
arXiv Detail & Related papers (2024-10-04T06:54:30Z) - EfficientQAT: Efficient Quantization-Aware Training for Large Language Models [50.525259103219256]
Quantization-aware training (QAT) offers a solution by reducing memory consumption through low-bit representations with minimal accuracy loss.
We propose Efficient Quantization-Aware Training (EfficientQAT), a more feasible QAT algorithm.
EfficientQAT involves two consecutive phases: block-wise training of all parameters (Block-AP) and end-to-end training of quantization parameters (E2E-QP).
arXiv Detail & Related papers (2024-07-10T17:53:30Z) - Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z) - Tuning Particle Accelerators with Safety Constraints using Bayesian
Optimization [73.94660141019764]
Tuning the machine parameters of particle accelerators is a repetitive and time-consuming task.
We propose and evaluate a step size-limited variant of safe Bayesian optimization.
arXiv Detail & Related papers (2022-03-26T02:21:03Z) - Recommender System Expedited Quantum Control Optimization [0.0]
Quantum control optimization algorithms are routinely used to generate optimal quantum gates or efficient quantum state transfers.
There are two main challenges in designing efficient optimization algorithms, namely overcoming the sensitivity to local optima and improving the computational speed.
Here, we propose and demonstrate the use of a machine learning method, specifically the recommender system (RS), to deal with the latter challenge.
arXiv Detail & Related papers (2022-01-29T10:25:41Z) - Efficient Exploration in Binary and Preferential Bayesian Optimization [0.5076419064097732]
We show that it is important for BO algorithms to distinguish between different types of uncertainty.
We propose several new acquisition functions that outperform state-of-the-art BO functions.
arXiv Detail & Related papers (2021-10-18T14:44:34Z) - Fast Variational AutoEncoder with Inverted Multi-Index for Collaborative
Filtering [59.349057602266]
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
We propose to decompose the inner-product-based softmax probability based on the inverted multi-index.
FastVAE can outperform the state-of-the-art baselines in terms of both sampling quality and efficiency.
arXiv Detail & Related papers (2021-09-13T08:31:59Z) - Multi-objective Recurrent Neural Networks Optimization for the Edge -- a
Quantization-based Approach [2.1987431057890467]
This article introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method, which considers both hardware efficiency and inference error as objectives for mixed-precision quantization.
We propose a search technique named "beacon-based search" to retrain only selected solutions in the search space and use them as beacons to estimate the effect of retraining on other solutions.
arXiv Detail & Related papers (2021-08-02T22:09:12Z) - BiAdam: Fast Adaptive Bilevel Optimization Methods [104.96004056928474]
Bilevel optimization has attracted increased interest in machine learning due to its many applications.
We provide a useful analysis framework for both the constrained and unconstrained optimization.
arXiv Detail & Related papers (2021-06-21T20:16:40Z) - Differentiable Expected Hypervolume Improvement for Parallel
Multi-Objective Bayesian Optimization [11.956059322407437]
We leverage recent advances in programming models and hardware acceleration for multi-objective BO using Expected Hypervolume Improvement (EHVI).
We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting.
Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
arXiv Detail & Related papers (2020-06-09T06:57:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.