Prior-mean-assisted Bayesian optimization application on FRIB Front-End tuning
- URL: http://arxiv.org/abs/2211.06400v1
- Date: Fri, 11 Nov 2022 18:34:15 GMT
- Title: Prior-mean-assisted Bayesian optimization application on FRIB Front-End tuning
- Authors: Kilean Hwang, Tomofumi Maruta, Alexander Plastun, Kei Fukushima, Tong
Zhang, Qiang Zhao, Peter Ostroumov, Yue Hao
- Abstract summary: We exploit a neural network model trained over historical data as a prior mean of BO for FRIB Front-End tuning.
- Score: 61.78406085010957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is often used for accelerator tuning due to its
high sample efficiency. However, the computational scalability of training over large
data sets can be problematic, and adopting historical data in a computationally
efficient way is not trivial. Here, we exploit a neural network model trained over
historical data as the prior mean of BO for FRIB Front-End tuning.
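
To make the core idea concrete, the sketch below (not the authors' code) shows one way to plug a neural network fitted to historical data in as the prior mean of a Gaussian-process surrogate using GPyTorch. The class names, network architecture, and toy data are illustrative assumptions only.

```python
# Minimal sketch, assuming GPyTorch: a GP surrogate for BO whose prior mean
# is a frozen neural network trained offline on historical tuning data.
import torch
import gpytorch


class HistoricalPriorMean(gpytorch.means.Mean):
    """GP prior mean given by a neural network fitted to historical data."""

    def __init__(self, net: torch.nn.Module):
        super().__init__()
        self.net = net
        for p in self.net.parameters():
            p.requires_grad_(False)  # keep the historical model frozen while fitting the GP

    def forward(self, x):
        # GPyTorch expects the mean with the trailing output dimension removed.
        return self.net(x).squeeze(-1)


class PriorMeanGP(gpytorch.models.ExactGP):
    """Exact GP surrogate whose prior mean is the historical NN prediction."""

    def __init__(self, train_x, train_y, likelihood, net):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = HistoricalPriorMean(net)
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


if __name__ == "__main__":
    d = 4  # number of tuning knobs (illustrative)
    # Stand-in for a model trained offline on historical Front-End tuning logs.
    historical_net = torch.nn.Sequential(
        torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    train_x = torch.rand(16, d)  # toy "fresh" observations collected during BO
    train_y = torch.rand(16)
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = PriorMeanGP(train_x, train_y, likelihood, historical_net)
    model.eval(); likelihood.eval()
    with torch.no_grad():
        post = likelihood(model(torch.rand(5, d)))  # posterior at candidate settings
        print(post.mean)
```

In this construction, far from the freshly collected BO observations the posterior mean reverts to the historical network's prediction rather than to zero, so the historical data can guide the search without the GP ever being trained on the full historical data set.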
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z) - Informed Spectral Normalized Gaussian Processes for Trajectory Prediction [0.0]
We propose a novel regularization-based continual learning method for SNGPs.
Our proposal builds upon well-established methods and requires no rehearsal memory or parameter expansion.
We apply our informed SNGP model to the trajectory prediction problem in autonomous driving by integrating prior drivability knowledge.
arXiv Detail & Related papers (2024-03-18T17:05:24Z) - PG-LBO: Enhancing High-Dimensional Bayesian Optimization with
Pseudo-Label and Gaussian Process Guidance [31.585328335396607]
Current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space.
We propose a novel method to effectively utilize unlabeled data with the guidance of labeled data.
Our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios.
arXiv Detail & Related papers (2023-12-28T11:57:58Z) - Optimizing Closed-Loop Performance with Data from Similar Systems: A
Bayesian Meta-Learning Approach [1.370633147306388]
We propose the use of meta-learning to generate an initial surrogate model based on data collected from performance optimization tasks.
The effectiveness of our proposed DKN-BO approach for speeding up control system performance optimization is demonstrated.
arXiv Detail & Related papers (2022-10-31T18:25:47Z) - Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z) - Feasible Low-thrust Trajectory Identification via a Deep Neural Network
Classifier [1.5076964620370268]
This work proposes a deep neural network (DNN) to accurately identify feasible low-thrust transfers prior to the optimization process.
The DNN classifier achieves an overall accuracy of 97.9%, the best performance among the tested algorithms.
arXiv Detail & Related papers (2022-02-10T11:34:37Z) - Improved Fine-tuning by Leveraging Pre-training Data: Theory and
Practice [52.11183787786718]
Fine-tuning a pre-trained model on the target data is widely used in many deep learning applications.
Recent studies have empirically shown that training from scratch can achieve final performance no worse than this pre-training strategy.
We propose a novel selection strategy to select a subset from pre-training data to help improve the generalization on the target task.
arXiv Detail & Related papers (2021-11-24T06:18:32Z) - JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps these limitations by querying additional data.
We show that it achieves no-regret under conditions analogous to GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.