Better call Surrogates: A hybrid Evolutionary Algorithm for
Hyperparameter optimization
- URL: http://arxiv.org/abs/2012.06453v1
- Date: Fri, 11 Dec 2020 16:19:59 GMT
- Title: Better call Surrogates: A hybrid Evolutionary Algorithm for
Hyperparameter optimization
- Authors: Subhodip Biswas, Adam D Cobb, Andreea Sistrunk, Naren Ramakrishnan,
Brian Jalaian
- Abstract summary: We propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.
The proposed STEADE model initially estimates the objective function landscape using Radial Basis Function interpolation, and then transfers the knowledge to an EA technique called Differential Evolution.
We empirically evaluate our model on the hyperparameter optimization problems as a part of the black-box optimization challenge at NeurIPS 2020 and demonstrate the improvement brought about by STEADE over the vanilla EA.
- Score: 18.359749929678635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a surrogate-assisted evolutionary algorithm (EA)
for hyperparameter optimization of machine learning (ML) models. The proposed
STEADE model initially estimates the objective function landscape using
Radial Basis Function interpolation, and then transfers the knowledge to an EA
technique called Differential Evolution that is used to evolve new solutions
guided by a Bayesian optimization framework. We empirically evaluate our model
on the hyperparameter optimization problems as a part of the black-box
optimization challenge at NeurIPS 2020 and demonstrate the improvement brought
about by STEADE over the vanilla EA.
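To make the approach concrete, below is a minimal sketch (not the authors' implementation) of a surrogate-assisted loop in the spirit of STEADE: an RBF surrogate is fitted to the evaluations gathered so far, Differential Evolution searches the cheap surrogate landscape, and only the resulting proposal is evaluated on the true objective. The toy objective, budgets, and function names are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution


def expensive_objective(x):
    # Stand-in for an expensive hyperparameter evaluation (assumed toy function).
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(10 * x)))


def surrogate_assisted_de(bounds, n_init=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    # Initial design: random configurations evaluated on the true (expensive) objective.
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([expensive_objective(x) for x in X])

    for _ in range(n_iter):
        # Fit an RBF surrogate to all evaluations gathered so far.
        surrogate = RBFInterpolator(X, y, smoothing=1e-6)
        # Differential Evolution searches the cheap surrogate landscape.
        result = differential_evolution(
            lambda x: float(surrogate(x[None, :])[0]),
            bounds,
            seed=int(rng.integers(1 << 31)),
        )
        # Evaluate only the surrogate's proposal on the true objective, then refit.
        X = np.vstack([X, result.x])
        y = np.append(y, expensive_objective(result.x))

    best = int(np.argmin(y))
    return X[best], y[best]


if __name__ == "__main__":
    x_best, y_best = surrogate_assisted_de(bounds=[(-1.0, 1.0)] * 3)
    print("best configuration:", x_best, "objective:", y_best)
```

The point this illustrates is that the expensive objective is queried only once per iteration while the EA runs entirely on the surrogate; STEADE additionally couples the search with a Bayesian optimization framework, which is omitted here for brevity.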
Related papers
- An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms [4.0998481751764]
We employ two open-source Large Language Models (LLMs) to analyze the optimization logs online.
We study our approach in the context of step-size adaptation for (1+1)-ES.
arXiv Detail & Related papers (2024-08-05T13:20:41Z)
- Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models [73.88009808326387]
We propose a novel spectrum-aware adaptation framework for generative models.
Our method adjusts both singular values and their basis vectors of pretrained weights.
We introduce Spectral Ortho Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity.
arXiv Detail & Related papers (2024-05-31T17:43:35Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual in order to accelerate the convergence of the optimization.
Our proposal shows broad promise for estimating the elite individual and accelerating convergence.
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
- Hyper-parameter optimization based on soft actor critic and hierarchical mixture regularization [5.063728016437489]
We model the hyperparameter optimization process as a Markov decision process and tackle it with reinforcement learning.
A novel hyperparameter optimization method based on soft actor critic and hierarchical mixture regularization is proposed.
arXiv Detail & Related papers (2021-12-08T02:34:43Z)
- A self-adapting super-resolution structures framework for automatic design of GAN [15.351639834230383]
We introduce a new super-resolution image reconstruction generative adversarial network framework.
We use a Bayesian optimization method to optimize the hyperparameters of the generator and discriminator.
Our method adopts Bayesian optimization as the optimization policy for the GAN in our model.
arXiv Detail & Related papers (2021-06-10T19:11:29Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters (a generic averaged zeroth-order gradient estimator is sketched after this list).
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Enhanced Innovized Repair Operator for Evolutionary Multi- and Many-objective Optimization [5.885238773559015]
"Innovization" is a task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in optimisation problems.
Recent studies have shown that a chronological sequence of non-dominated solutions also possess salient patterns that can be used to learn problem features.
We propose a machine-learning- (ML-) assisted modelling approach that learns the modifications in design variables needed to advance population members towards the Pareto-optimal set.
arXiv Detail & Related papers (2020-11-21T10:29:15Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in a complex training pipeline yields predictions that lead to better decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
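The HOZOG entry above refers to averaged zeroth-order hyper-gradients; as a point of reference, here is a minimal, hedged sketch (not taken from any of the listed papers) of a generic averaged zeroth-order gradient estimator used to update hyperparameters from function evaluations alone. The toy loss, step size, and sample count are assumptions.

```python
import numpy as np


def zo_gradient(f, x, n_samples=16, mu=1e-2, rng=None):
    """Gaussian-smoothing zeroth-order gradient estimate, averaged over n_samples directions."""
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)        # random probing direction
        grad += (f(x + mu * u) - fx) / mu * u   # forward-difference directional estimate
    return grad / n_samples


def validation_loss(hparams):
    # Stand-in for an expensive validation-loss evaluation (assumed toy function).
    return float(np.sum((hparams - np.array([0.1, 1.0])) ** 2))


if __name__ == "__main__":
    h = np.array([1.0, 0.0])
    for _ in range(200):
        h = h - 0.1 * zo_gradient(validation_loss, h)  # gradient-free hyperparameter update
    print("tuned hyperparameters:", h)
```

Averaging over several random directions reduces the variance of the estimate at the cost of extra function evaluations per update.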
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.