Hyperparameter Adaptive Search for Surrogate Optimization: A
Self-Adjusting Approach
- URL: http://arxiv.org/abs/2310.07970v1
- Date: Thu, 12 Oct 2023 01:26:05 GMT
- Title: Hyperparameter Adaptive Search for Surrogate Optimization: A
Self-Adjusting Approach
- Authors: Nazanin Nezami and Hadis Anahideh
- Abstract summary: Surrogate optimization (SO) algorithms have shown promise for optimizing expensive black-box functions.
Our approach identifies and modifies the most influential hyperparameters specific to each problem and SO approach.
Experimental results demonstrate the effectiveness of HASSO in enhancing the performance of various SO algorithms.
- Score: 1.6317061277457001
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Surrogate Optimization (SO) algorithms have shown promise for optimizing
expensive black-box functions. However, their performance is heavily influenced
by hyperparameters related to sampling and surrogate fitting, which poses a
challenge to their widespread adoption. We investigate the impact of
hyperparameters on various SO algorithms and propose a Hyperparameter Adaptive
Search for SO (HASSO) approach. HASSO is not a hyperparameter tuning algorithm,
but a generic self-adjusting SO algorithm that dynamically tunes its own
hyperparameters while concurrently optimizing the primary objective function,
without requiring additional evaluations. The aim is to improve the
accessibility, effectiveness, and convergence speed of SO algorithms for
practitioners. Our approach identifies and modifies the most influential
hyperparameters specific to each problem and SO approach, reducing the need for
manual tuning without significantly increasing the computational burden.
Experimental results demonstrate the effectiveness of HASSO in enhancing the
performance of various SO algorithms across different global optimization test
problems.
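The abstract describes the self-adjusting mechanism only at a high level, so the following Python sketch illustrates the general idea under stated assumptions: a simple inverse-distance surrogate stands in for the fitted model, and a single exploration-weight hyperparameter is adapted using only the evaluations the loop already pays for. The function names and the adaptation rule are illustrative, not the authors' implementation.

```python
import numpy as np

def self_adjusting_so(f, bounds, n_init=8, n_iter=40, seed=0):
    """Surrogate-optimization loop that retunes its own exploration
    weight from observed progress (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(n_init, dim))        # initial design
    y = np.array([f(x) for x in X])                    # expensive evaluations
    w = 0.5                                            # exploration weight (self-tuned)

    def predict(c):
        # Inverse-distance-weighted surrogate: a stand-in for RBF fitting.
        d = np.linalg.norm(X - c, axis=1)
        if d.min() < 1e-12:
            return y[d.argmin()]
        wts = 1.0 / d**2
        return wts @ y / wts.sum()

    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, size=(200, dim))    # candidate pool
        pred = np.array([predict(c) for c in cand])
        dist = np.array([np.linalg.norm(X - c, axis=1).min() for c in cand])
        # Score candidates: low predicted value, far from visited points.
        score = (1.0 - w) * (pred - pred.min()) / (np.ptp(pred) + 1e-12) \
                - w * dist / (dist.max() + 1e-12)
        x_new = cand[score.argmin()]
        y_new = f(x_new)                               # the only new evaluation
        # Self-adjustment reuses that evaluation: exploit after improvement,
        # explore after stagnation -- no extra function calls needed.
        w = max(0.05, 0.8 * w) if y_new < y.min() else min(0.95, 1.25 * w)
        X = np.vstack([X, x_new])
        y = np.append(y, y_new)
    return X[y.argmin()], y.min()

# Example: minimize a 2-D sphere function.
best_x, best_f = self_adjusting_so(lambda x: float(np.sum(x**2)),
                                   bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(best_x, best_f)
```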
Related papers
- MADA: Meta-Adaptive Optimizers through hyper-gradient Descent [73.1383658672682]
We introduce Meta-Adaptive Optimizers (MADA), a unified framework that can generalize several known optimizers and dynamically learn the most suitable one during training.
We empirically compare MADA to other popular optimizers on vision and language tasks, and find that MADA consistently outperforms Adam and other popular optimizers.
We also propose AVGrad, a modification of AMSGrad that replaces the maximum operator with averaging, which is more suitable for hyper-gradient optimization.
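The summary only names the change, so here is a hedged sketch of the averaging idea: AMSGrad keeps a running maximum of the second-moment estimate, and the AVGrad variant described above replaces that maximum with a running mean. The state layout and bias-correction details below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def avgrad_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AVGrad-style update: AMSGrad with max() over second moments
    replaced by a running average (sketch, not the paper's exact rule)."""
    m, v, v_bar, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad             # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2          # second-moment estimate
    v_bar = v_bar + (v - v_bar) / t          # running mean, not max(v_bar, v)
    m_hat = m / (1 - b1**t)                  # standard Adam bias correction
    new_param = param - lr * m_hat / (np.sqrt(v_bar) + eps)
    return new_param, (m, v, v_bar, t)

# Usage: state starts as zeros and is threaded through successive steps.
w = np.ones(3)
state = (np.zeros(3), np.zeros(3), np.zeros(3), 0)
for _ in range(100):
    g = 2 * w                                # gradient of ||w||^2
    w, state = avgrad_step(w, g, state)
```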
arXiv Detail & Related papers (2024-01-17T00:16:46Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Performance comparison of optimization methods on variational quantum algorithms [2.690135599539986]
Variational quantum algorithms (VQAs) offer a promising path towards using near-term quantum hardware for applications in academic and industrial research.
We study the performance of four commonly used gradient-free optimization methods: SLSQP, COBYLA, CMA-ES, and SPSA.
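As a concrete way to reproduce this kind of comparison, the snippet below runs two of the four methods on a stand-in cost function via SciPy; CMA-ES and SPSA are not in SciPy and would need third-party packages (e.g. the `cma` package), and the cost function here is a toy, not a real VQA energy.

```python
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # Toy stand-in for a VQA energy evaluation.
    return float(np.sum(np.sin(theta) ** 2) + 0.1 * np.sum(theta ** 2))

theta0 = np.random.default_rng(0).uniform(-np.pi, np.pi, size=4)
for method in ("SLSQP", "COBYLA"):
    res = minimize(cost, theta0, method=method)
    print(f"{method}: f* = {res.fun:.6f} in {res.nfev} evaluations")
```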
arXiv Detail & Related papers (2021-11-26T12:13:20Z)
- HyP-ABC: A Novel Automated Hyper-Parameter Tuning Algorithm Using Evolutionary Optimization [1.6114012813668934]
We propose HyP-ABC, an automatic hybrid hyper-parameter optimization algorithm using the modified artificial bee colony approach.
Compared to the state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned.
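The blurb names the technique but not its mechanics; the sketch below is a bare-bones artificial bee colony (employed-bee and scout phases only, omitting the onlooker phase and all of HyP-ABC's modifications), just to show the kind of search loop involved.

```python
import numpy as np

def abc_sketch(f, bounds, n_bees=10, n_iter=50, limit=5, seed=0):
    """Minimal artificial-bee-colony loop (generic ABC, not HyP-ABC)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(n_bees, dim))   # food sources = candidates
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_bees, dtype=int)          # stagnation counters
    for _ in range(n_iter):
        for i in range(n_bees):                   # employed-bee phase
            j, k = rng.integers(dim), rng.integers(n_bees)
            cand = X[i].copy()
            cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                       # greedy replacement
                X[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        stale = trials > limit                    # scout phase: reset stale sources
        X[stale] = rng.uniform(lo, hi, size=(int(stale.sum()), dim))
        fit[stale] = [f(x) for x in X[stale]]
        trials[stale] = 0
    return X[fit.argmin()], float(fit.min())
```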
arXiv Detail & Related papers (2021-09-11T16:45:39Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem, where A is a black-box training algorithm.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
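Neither the estimator nor the averaging scheme is spelled out in the summary, so this is a generic averaged two-point zeroth-order gradient estimate applied to the hyperparameters; `val_loss`, the smoothing radius `mu`, and the number of perturbations `q` are all illustrative assumptions.

```python
import numpy as np

def zo_hypergrad_step(lam, val_loss, lr=0.1, mu=1e-2, q=5, rng=None):
    """Update hyperparameters lam with an average of q two-point
    zeroth-order hyper-gradient estimates (generic sketch, not HOZOG's
    exact estimator). val_loss(lam) is the black box: train with lam,
    return the validation loss."""
    rng = rng or np.random.default_rng()
    base = val_loss(lam)
    g = np.zeros_like(lam)
    for _ in range(q):
        u = rng.standard_normal(lam.shape)          # random direction
        g += (val_loss(lam + mu * u) - base) / mu * u
    return lam - lr * g / q                         # descent step on lam

# Usage with a toy "validation loss" over two hyperparameters.
lam = np.array([1.0, 2.0])
for _ in range(20):
    lam = zo_hypergrad_step(lam, lambda l: float(np.sum((l - 0.5) ** 2)))
```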
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
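To make "a sequence of variational algorithms" concrete, here is a generic block-coordinate stand-in: each inner run optimizes only a small slice of the ansatz parameters while the rest stay frozen. This illustrates the sequencing idea only; PECT's actual pruning and parameter-selection strategy is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def blockwise_optimize(cost, theta0, block=2, sweeps=3):
    """Optimize parameters one small block at a time, freezing the rest
    (generic block-coordinate sketch, not PECT itself)."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(sweeps):
        for start in range(0, theta.size, block):
            idx = slice(start, min(start + block, theta.size))

            def sub_cost(x, idx=idx):
                t = theta.copy()
                t[idx] = x            # only the active block varies
                return cost(t)

            res = minimize(sub_cost, theta[idx], method="COBYLA")
            theta[idx] = res.x
    return theta, cost(theta)

# Toy "ansatz cost": minimized when every angle is pi/4.
theta, f_final = blockwise_optimize(
    lambda t: float(np.sum((np.cos(t) - np.sin(t)) ** 2)), np.zeros(6))
```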
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) in the scenario of hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments of SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z)
- Automatic Setting of DNN Hyper-Parameters by Mixing Bayesian Optimization and Tuning Rules [0.6875312133832078]
We build a new algorithm for evaluating and analyzing the results of the network on the training and validation sets.
We use a set of tuning rules to add new hyper-parameters and/or to reduce the hyper-parameter search space to select a better combination.
arXiv Detail & Related papers (2020-06-03T08:53:48Z)
- On Hyper-parameter Tuning for Stochastic Optimization Algorithms [28.88646928299302]
This paper proposes the first-ever algorithmic framework for tuning the hyper-parameters of stochastic optimization algorithms based on reinforcement learning.
The proposed framework can be used as a standard tool for hyper-parameter tuning in stochastic optimization algorithms.
arXiv Detail & Related papers (2020-03-04T12:29:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.