Agent-based Collaborative Random Search for Hyper-parameter Tuning and
Global Function Optimization
- URL: http://arxiv.org/abs/2303.03394v1
- Date: Fri, 3 Mar 2023 21:10:17 GMT
- Title: Agent-based Collaborative Random Search for Hyper-parameter Tuning and
Global Function Optimization
- Authors: Ahmad Esmaeili, Zahra Ghorrati, Eric T. Matson
- Abstract summary: This paper proposes an agent-based collaborative technique for finding near-optimal values for any arbitrary set of hyper-parameters in a machine learning model.
The behavior of the presented model, in particular its sensitivity to changes in its design parameters, is investigated in both machine learning and global function optimization applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyper-parameter optimization is one of the most tedious yet crucial steps in
training machine learning models. There are numerous methods for this vital
model-building stage, ranging from domain-specific manual tuning guidelines
suggested by the oracles to the utilization of general-purpose black-box
optimization techniques. This paper proposes an agent-based collaborative
technique for finding near-optimal values for any arbitrary set of
hyper-parameters (or decision variables) in a machine learning model (or
general function optimization problem). The developed method forms a
hierarchical agent-based architecture for the distribution of the searching
operations at different dimensions and employs a cooperative searching
procedure based on an adaptive width-based random sampling technique to locate
the optima. The behavior of the presented model, specifically against the
changes in its design parameters, is investigated in both machine learning and
global function optimization applications, and its performance is compared with
that of two randomized tuning strategies that are commonly used in practice.
According to the empirical results, the proposed model outperformed the
compared methods in the experimented classification, regression, and
multi-dimensional function optimization tasks, notably in a higher number of
dimensions and in the presence of limited on-device computational resources.
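The abstract describes the search procedure only at a high level. As a loose illustration of per-dimension agents cooperating through adaptive width-based random sampling, a minimal Python sketch follows; the names DimensionAgent, width_decay, and samples_per_round are assumptions made for this sketch, not the paper's architecture or interface.
```python
# Minimal sketch (not the authors' implementation) of per-dimension agents that
# cooperate through adaptive width-based random sampling.
import random

class DimensionAgent:
    """Searches a single decision variable within a shrinking window."""
    def __init__(self, low, high, width_decay=0.7):
        self.low, self.high = low, high
        self.center = (low + high) / 2.0
        self.width = (high - low) / 2.0
        self.width_decay = width_decay

    def propose(self):
        # Uniform sample inside the current window, clipped to the variable's bounds.
        lo = max(self.low, self.center - self.width)
        hi = min(self.high, self.center + self.width)
        return random.uniform(lo, hi)

    def update(self, best_coordinate):
        # Re-center on the incumbent's coordinate and narrow the window
        # (the adaptive-width step).
        self.center = best_coordinate
        self.width *= self.width_decay

def collaborative_search(objective, bounds, rounds=20, samples_per_round=10):
    agents = [DimensionAgent(lo, hi) for lo, hi in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(rounds):
        for _ in range(samples_per_round):
            x = [agent.propose() for agent in agents]  # each agent covers one dimension
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
        for agent, coordinate in zip(agents, best_x):  # share the incumbent with all agents
            agent.update(coordinate)
    return best_x, best_f

if __name__ == "__main__":
    # Toy global-function example: minimize the 5-dimensional sphere function.
    sphere = lambda x: sum(v * v for v in x)
    print(collaborative_search(sphere, [(-5.0, 5.0)] * 5))
```
In the paper's setting the per-dimension agents are additionally organized into a hierarchy with coordinating parent agents; that layer is omitted here for brevity.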
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
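As context for the Ordered Weighted Averaging (OWA) objective mentioned in the entry above, here is a small hedged sketch. The weight convention shown (largest weight on the worst outcome, a common fairness-oriented choice) and the example numbers are illustrative assumptions, not necessarily the exact formulation used in that paper.
```python
# Hedged sketch of an Ordered Weighted Averaging (OWA) aggregation, assuming the
# common fairness-oriented convention: outcomes are sorted from worst to best and
# the larger weights are placed on the worse outcomes.
import numpy as np

def owa(outcomes, weights):
    """weights should be non-negative, sum to 1, and align with the sorted outcomes."""
    y = np.sort(np.asarray(outcomes, dtype=float))  # ascending: worst outcome first
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, y))

# Example: three per-group outcomes, with most of the weight on the worst-off group.
print(owa([0.9, 0.4, 0.7], weights=[0.6, 0.3, 0.1]))  # 0.6*0.4 + 0.3*0.7 + 0.1*0.9 = 0.54
```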
- A Survey on Multi-Objective based Parameter Optimization for Deep Learning [1.3223682837381137]
We focus on exploring the effectiveness of multi-objective optimization strategies for parameter optimization in conjunction with deep neural networks.
Multi-objective optimization and deep neural networks are combined to provide insights into prediction generation and analysis across multiple applications.
arXiv Detail & Related papers (2023-05-17T07:48:54Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
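To make the "pre-train a tighter prior from similar functions" idea in the entry above concrete, the following is a minimal sketch assuming scikit-learn and toy data: a GP kernel is fitted on evaluations of related functions and then reused, frozen, inside a simple Bayesian-optimization loop on a new task. The data, the lower-confidence-bound rule, and all names are illustrative, not the paper's method or code.
```python
# Loose illustration: pre-train a GP kernel on similar functions, then reuse it frozen.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# "Similar functions": shifted quadratics observed on past tasks.
past_X = rng.uniform(-3, 3, size=(60, 1))
past_y = (past_X[:, 0] - rng.choice([-1.0, 0.0, 1.0], size=60)) ** 2

# Pre-training step: learn kernel hyper-parameters by maximizing marginal likelihood.
pretrained = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
pretrained.fit(past_X, past_y)
frozen_kernel = pretrained.kernel_  # learned variance/lengthscale become the prior

# New task: reuse the frozen kernel (optimizer=None keeps it fixed) in a BO loop.
target = lambda x: (x - 0.5) ** 2
X = list(rng.uniform(-3, 3, size=(3, 1)))
y = [float(target(x[0])) for x in X]
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=frozen_kernel, optimizer=None, normalize_y=True)
    gp.fit(np.array(X), np.array(y))
    candidates = rng.uniform(-3, 3, size=(200, 1))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmin(mu - sigma)]  # crude lower-confidence-bound rule
    X.append(x_next)
    y.append(float(target(x_next[0])))
print("best x found:", float(X[int(np.argmin(y))][0]))
```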
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Hierarchical Collaborative Hyper-parameter Tuning [0.0]
Hyper-parameter tuning is among the most critical stages in building machine learning solutions.
This paper demonstrates how multi-agent systems can be utilized to develop a distributed technique for determining near-optimal hyper-parameter values.
arXiv Detail & Related papers (2022-05-11T05:16:57Z)
- Consolidated learning -- a domain-specific model-free optimization strategy with examples for XGBoost and MIMIC-IV [4.370097023410272]
This paper proposes a new formulation of the tuning problem, called consolidated learning.
In such settings, we are interested in the total optimization time rather than tuning for a single task.
We demonstrate the effectiveness of this approach through an empirical study of the XGBoost algorithm and a collection of predictive tasks extracted from the MIMIC-IV medical database.
arXiv Detail & Related papers (2022-01-27T21:38:53Z)
- Hyper-parameter optimization based on soft actor critic and hierarchical mixture regularization [5.063728016437489]
We model the hyper-parameter optimization process as a Markov decision process and tackle it with reinforcement learning.
A novel hyper-parameter optimization method based on soft actor critic and hierarchical mixture regularization is proposed.
arXiv Detail & Related papers (2021-12-08T02:34:43Z)
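The entry above frames hyper-parameter optimization as a Markov decision process. The sketch below shows one plausible such casting (state = evaluation history, action = next configuration, reward = improvement over the incumbent), with a random policy standing in for the paper's soft actor-critic agent and hierarchical mixture regularization; all names and choices here are assumptions for illustration.
```python
# One plausible MDP casting of hyper-parameter optimization (illustrative sketch,
# not the paper's formulation).
import random

class HPOEnv:
    def __init__(self, evaluate, bounds, horizon=20):
        self.evaluate = evaluate          # config -> validation score (higher is better)
        self.bounds = bounds
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.history = []                 # list of (config, score) pairs acts as the state
        self.best = float("-inf")
        return self.history

    def step(self, action):
        score = self.evaluate(action)
        reward = max(0.0, score - self.best)   # reward = improvement over the incumbent
        self.best = max(self.best, score)
        self.history.append((action, score))
        done = len(self.history) >= self.horizon
        return self.history, reward, done

# Toy usage with a random policy in place of a learned actor.
env = HPOEnv(evaluate=lambda cfg: -(cfg["lr"] - 0.1) ** 2, bounds={"lr": (1e-4, 1.0)})
state, done = env.reset(), False
while not done:
    action = {"lr": random.uniform(*env.bounds["lr"])}
    state, reward, done = env.step(action)
print("best score reached:", env.best)
```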
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
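For the zeroth-order hyper-gradient idea in the entry above, a hedged sketch of an averaged finite-difference gradient estimator follows; HOZOG's actual A-based constrained formulation and update scheme are not reproduced, and the step sizes, direction counts, and toy loss are illustrative.
```python
# Hedged sketch: estimate a hyper-gradient by perturbing the hyper-parameter vector
# along random unit directions and differencing the objective values.
import numpy as np

def zeroth_order_grad(f, lam, mu=1e-2, num_dirs=10, rng=np.random.default_rng(0)):
    """Average of (f(lam + mu*u) - f(lam)) / mu * u over random unit directions u."""
    f0, grad = f(lam), np.zeros_like(lam)
    for _ in range(num_dirs):
        u = rng.standard_normal(lam.shape)
        u /= np.linalg.norm(u)
        grad += (f(lam + mu * u) - f0) / mu * u
    return grad / num_dirs

# Toy usage: descend on hyper-parameters of a quadratic "validation loss".
val_loss = lambda lam: float(np.sum((lam - np.array([0.3, 0.7])) ** 2))
lam = np.array([1.0, -1.0])
for _ in range(200):
    lam = lam - 0.1 * zeroth_order_grad(val_loss, lam)
print(lam)  # should end up close to [0.3, 0.7]
```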
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
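As a small illustration of the "jointly optimizing prediction effectiveness and training efficiency" idea in the entry above, the sketch below folds cross-validated accuracy and measured training time into one scalar that a Bayesian-optimization loop, or any black-box tuner, could maximize. The trade-off weight alpha, the model, and the timing-based penalty are assumptions for this sketch, not the paper's unified framework.
```python
# Illustrative sketch (not the paper's framework): a combined effectiveness/efficiency
# objective suitable for any black-box tuner.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

def joint_objective(n_estimators, max_depth, alpha=0.05):
    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                   random_state=0)
    start = time.perf_counter()
    accuracy = cross_val_score(model, X, y, cv=3).mean()
    elapsed = time.perf_counter() - start
    return accuracy - alpha * elapsed  # effectiveness minus an efficiency penalty

# Compare a small and a large configuration under the combined criterion.
print(joint_objective(50, 5), joint_objective(400, None))
```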
- On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice [10.350337750192997]
We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms.
This paper will help industrial users, data analysts, and researchers to better develop machine learning models.
arXiv Detail & Related papers (2020-07-30T21:11:01Z)