IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning
- URL: http://arxiv.org/abs/2009.06390v1
- Date: Thu, 10 Sep 2020 18:47:04 GMT
- Title: IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning
- Authors: Yuxi Huan, Fan Wu, Michail Basios, Leslie Kanthan, Lingbo Li, Baowen
Xu
- Abstract summary: We introduce an intelligent evolutionary optimisation algorithm which applies machine learning techniques to the traditional evolutionary algorithm.
Our approach accelerates the optimisation speed by 30.40% on average and up to 77.06% in the best scenarios.
- Score: 9.082096472600751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter optimisation is a crucial process in searching the optimal
machine learning model. The efficiency of finding the optimal hyperparameter
settings has been a major concern in recent research, since the optimisation
process could be time-consuming, especially when the objective functions are
highly expensive to evaluate. In this paper, we introduce an intelligent
evolutionary optimisation algorithm which applies machine learning techniques to
the traditional evolutionary algorithm to accelerate the overall optimisation
process of tuning machine learning models in classification problems. We
demonstrate our Intelligent Evolutionary Optimisation (IEO) in a series of
controlled experiments, comparing with traditional evolutionary optimisation in
hyperparameter tuning. The empirical study shows that our approach accelerates
the optimisation speed by 30.40% on average and up to 77.06% in the best
scenarios.
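The abstract describes accelerating evolutionary hyperparameter tuning by letting a learned model steer the traditional evolutionary loop. The paper does not specify the mechanism in the text above, so the following is only a minimal illustrative sketch of one common way such a combination works: a cheap surrogate (here a 1-nearest-neighbour predictor) pre-screens offspring so that only promising candidates pay for an expensive fitness evaluation. The objective, surrogate, and thresholds are all hypothetical stand-ins, not taken from the paper.

```python
# Hedged sketch: surrogate-filtered evolutionary hyperparameter search.
# objective(), surrogate_score(), and all constants are illustrative.
import random

random.seed(0)

def objective(lr, depth):
    # Stand-in for an expensive model-training-and-validation run.
    return -(lr - 0.1) ** 2 - 0.01 * (depth - 5) ** 2

def surrogate_score(candidate, history):
    # 1-nearest-neighbour surrogate: predict fitness from the closest
    # previously evaluated configuration.
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(history, key=lambda h: dist(h[0], candidate))
    return nearest[1]

def evolve(generations=20, pop_size=8):
    pop = [(random.uniform(0.0, 0.5), random.randint(1, 10))
           for _ in range(pop_size)]
    history = [(p, objective(*p)) for p in pop]
    evals = len(history)
    for _ in range(generations):
        parents = sorted(history, key=lambda h: h[1], reverse=True)[:pop_size]
        children = []
        for (lr, depth), _ in parents:
            # Gaussian / integer mutation of each parent configuration.
            children.append((min(0.5, max(1e-4, lr + random.gauss(0, 0.05))),
                             min(10, max(1, depth + random.choice([-1, 0, 1])))))
        # Surrogate pre-screening: only children predicted to beat the
        # current median fitness get an expensive objective evaluation.
        median = sorted(h[1] for h in history)[len(history) // 2]
        for c in children:
            if surrogate_score(c, history) >= median:
                history.append((c, objective(*c)))
                evals += 1
    best = max(history, key=lambda h: h[1])
    return best, evals

best, evals = evolve()
```

The saving reported in the paper comes from skipped evaluations; in this toy version, `evals` stays below the `pop_size * (generations + 1)` calls a plain evolutionary loop would make whenever the surrogate rejects a child.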
Related papers
- An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms [4.0998481751764]
We employ two open-source Large Language Models (LLMs) to analyze the optimization logs online.
We study our approach in the context of step-size adaptation for (1+1)-ES.
arXiv Detail & Related papers (2024-08-05T13:20:41Z) - VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned, meta-training code, the associated train test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z) - A Data-Driven Evolutionary Transfer Optimization for Expensive Problems
in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z) - Improving Multi-fidelity Optimization with a Recurring Learning Rate for
Hyperparameter Tuning [7.591442522626255]
We propose Multi-fidelity Optimization with a Recurring Learning rate (MORL).
MORL incorporates CNNs' optimization process into multi-fidelity optimization.
It alleviates the slow-starter problem and achieves a more precise low-fidelity approximation.
arXiv Detail & Related papers (2022-09-26T08:16:31Z) - Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z) - Adaptive Optimizer for Automated Hyperparameter Optimization Problem [0.0]
In this paper, we present a general framework that constructs an adaptive optimizer, which automatically adjusts the appropriate parameters during the optimization process.
arXiv Detail & Related papers (2022-01-28T13:58:10Z) - Hyper-parameter optimization based on soft actor critic and hierarchical
mixture regularization [5.063728016437489]
We model the hyperparameter optimization process as a Markov decision process, and tackle it with reinforcement learning.
A novel hyperparameter optimization method based on soft actor critic and hierarchical mixture regularization is proposed.
arXiv Detail & Related papers (2021-12-08T02:34:43Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG)
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
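The two-step recipe above (formulate HPO as a constrained problem, then update hyperparameters with averaged zeroth-order hyper-gradients) can be illustrated with a generic zeroth-order gradient estimate. This is a minimal sketch of the general idea, not the HOZOG algorithm itself: the validation objective, perturbation scale `mu`, sample count, and step size are all hypothetical.

```python
# Hedged sketch: averaged zeroth-order (finite-difference) hyper-gradient
# descent on a single regularization hyperparameter. Not the paper's method.
import random

random.seed(1)

def validation_loss(lmbda):
    # Stand-in for training a model with regularization lmbda and
    # returning the validation loss (hypothetical objective).
    return (lmbda - 0.3) ** 2 + 0.05

def avg_zeroth_order_grad(f, x, mu=1e-2, samples=8):
    # Average random finite-difference estimates of f'(x):
    # (f(x + mu*u) - f(x)) / mu * u  with u drawn from {-1, +1}.
    total = 0.0
    for _ in range(samples):
        u = random.choice([-1.0, 1.0])
        total += (f(x + mu * u) - f(x)) / mu * u
    return total / samples

lmbda = 1.0
for _ in range(200):
    # Gradient-free update: only function evaluations are needed,
    # which is the appeal of zeroth-order hyper-gradients.
    lmbda -= 0.1 * avg_zeroth_order_grad(validation_loss, lmbda)
```

Because each estimate needs only objective evaluations, no differentiation through the training loop is required; averaging over several perturbations reduces the estimator's variance.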
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs)
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose a novel stochastic gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.