HyP-ABC: A Novel Automated Hyper-Parameter Tuning Algorithm Using
Evolutionary Optimization
- URL: http://arxiv.org/abs/2109.05319v1
- Date: Sat, 11 Sep 2021 16:45:39 GMT
- Title: HyP-ABC: A Novel Automated Hyper-Parameter Tuning Algorithm Using
Evolutionary Optimization
- Authors: Leila Zahedi, Farid Ghareh Mohammadi, M. Hadi Amini
- Abstract summary: We propose HyP-ABC, an automatic hybrid hyper-parameter optimization algorithm using the modified artificial bee colony approach.
Compared to the state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning techniques lend themselves as promising decision-making and
analytic tools in a wide range of applications. Different ML algorithms have
various hyper-parameters. In order to tailor an ML model towards a specific
application, a large number of hyper-parameters should be tuned. Tuning the
hyper-parameters directly affects the performance (accuracy and run-time).
However, for large-scale search spaces, efficiently exploring the vast number
of hyper-parameter combinations is computationally challenging. Existing
automated hyper-parameter tuning techniques suffer from high time complexity.
In this paper, we propose HyP-ABC, an innovative automatic hybrid
hyper-parameter optimization algorithm based on a modified artificial bee colony
approach, and use it to maximize the classification accuracy of three ML
algorithms, namely random forest, extreme gradient boosting, and support vector
machine. Compared to state-of-the-art techniques, HyP-ABC is more efficient and
has only a limited number of parameters of its own to be tuned, making it
worthwhile for real-world hyper-parameter optimization problems. We further
compare the proposed HyP-ABC algorithm with state-of-the-art techniques. To
ensure the robustness of the proposed method, the algorithm searches a wide
range of feasible hyper-parameter values and is tested using a real-world
educational dataset.
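The abstract above describes the method only at a high level. As a rough illustration, the following is a minimal sketch of an artificial-bee-colony-style hyper-parameter search for a random forest. It is not the authors' HyP-ABC implementation: the search space, budgets, dataset, and helper names are illustrative assumptions, and the roulette-wheel onlooker phase of standard ABC is omitted for brevity.

    # Minimal ABC-style hyper-parameter search (illustrative; not the authors' code).
    import random
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_digits(return_X_y=True)

    # Illustrative integer search space; HyP-ABC also tunes XGBoost and SVM.
    SPACE = {"n_estimators": (10, 300), "max_depth": (2, 30), "min_samples_split": (2, 10)}

    def sample():
        return {k: random.randint(lo, hi) for k, (lo, hi) in SPACE.items()}

    def fitness(params):
        model = RandomForestClassifier(random_state=0, **params)
        return cross_val_score(model, X, y, cv=3).mean()

    def neighbor(params):
        # An employed bee perturbs one randomly chosen dimension of a food source.
        new = dict(params)
        k = random.choice(list(SPACE))
        lo, hi = SPACE[k]
        new[k] = min(hi, max(lo, new[k] + random.randint(-5, 5)))
        return new

    food = [sample() for _ in range(5)]      # food sources = candidate configurations
    scores = [fitness(p) for p in food]
    trials = [0] * len(food)
    LIMIT = 3                                # abandonment limit for scout bees

    for _ in range(10):                      # iteration budget (illustrative)
        for i in range(len(food)):
            cand = neighbor(food[i])
            s = fitness(cand)
            if s > scores[i]:                # greedy replacement of the food source
                food[i], scores[i], trials[i] = cand, s, 0
            else:
                trials[i] += 1
            if trials[i] > LIMIT:            # scout bee: abandon and re-sample
                food[i] = sample()
                scores[i] = fitness(food[i])
                trials[i] = 0

    best = max(range(len(food)), key=scores.__getitem__)
    print(food[best], scores[best])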
Related papers
- A Linear Programming Enhanced Genetic Algorithm for Hyperparameter Tuning in Machine Learning
In this paper, we formulate the hyperparameter tuning problem in machine learning as a bilevel program.
The bilevel program is solved using a micro genetic algorithm that is enhanced with a linear program.
We test the performance of the proposed approach on two datasets, MNIST and CIFAR-10.
arXiv Detail & Related papers (2024-06-30T07:11:00Z)
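To make the micro genetic algorithm idea above concrete, here is a toy sketch of a micro-GA outer loop (tiny population, elitist restart on convergence) on a stand-in objective. The linear-programming enhancement described in the paper is not modeled; every name and constant below is illustrative.

    # Toy micro-GA: 5 individuals, elitist restart on convergence (LP step omitted).
    import random

    def objective(x):                 # stand-in for validation error at hyperparams x
        return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

    def random_individual():
        return [random.random(), random.random()]

    pop = [random_individual() for _ in range(5)]
    for gen in range(50):
        pop.sort(key=objective)
        elite = pop[0]
        if objective(pop[-1]) - objective(elite) < 1e-3:
            # Micro-population has converged: keep the elite, restart the rest.
            pop = [elite] + [random_individual() for _ in range(4)]
            continue
        children = []
        for _ in range(4):
            mate = random.choice(pop[1:])
            child = [(a + b) / 2 + random.gauss(0, 0.05) for a, b in zip(elite, mate)]
            children.append([min(1.0, max(0.0, v)) for v in child])
        pop = [elite] + children

    print(min(pop, key=objective))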
- Multi-objective hyperparameter optimization with performance uncertainty
This paper presents results on multi-objective hyperparameter optimization with uncertainty in the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
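One simple way to realize a GPR metamodel with heterogeneous (per-evaluation) noise, as in the entry above, is scikit-learn's per-sample alpha argument. The data and noise levels below are synthetic stand-ins, and the TPE sampling side is not shown.

    # GPR metamodel with per-point (heteroscedastic) noise; synthetic stand-in data.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(30, 1))             # sampled hyperparameter values
    noise = 0.01 + 0.1 * X[:, 0]                    # noise level varies across space
    y = np.sin(6 * X[:, 0]) + rng.normal(0, noise)  # noisy objective evaluations

    # alpha accepts one variance per training point, i.e. heterogeneous noise.
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=noise ** 2)
    gpr.fit(X, y)

    mean, std = gpr.predict(np.array([[0.5]]), return_std=True)
    print(mean, std)   # predictive mean and uncertainty at a candidate point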
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times, with speedups of 3× to 30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
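The core idea above, selecting a small data subset whose gradients track the full-data gradient so hyper-parameter trials can run on the subset, can be sketched with a greedy matching loop. This is a toy illustration, not the AUTOMATA implementation; per-example weights and the surrounding tuning loop are omitted.

    # Toy greedy subset selection: pick points whose summed gradients track the
    # full-data gradient, so hyper-parameter trials can run on the small subset.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) > 0).astype(float)
    w = np.zeros(5)

    p = 1 / (1 + np.exp(-X @ w))          # logistic predictions at current weights
    grads = (p - y)[:, None] * X          # per-example gradients, shape (n, d)
    residual = grads.sum(axis=0)          # full gradient still to be "explained"

    chosen, mask = [], np.ones(len(X), dtype=bool)
    for _ in range(20):                   # select 20 of 200 points
        scores = grads @ residual
        scores[~mask] = -np.inf
        i = int(np.argmax(scores))        # gradient most aligned with the residual
        chosen.append(i)
        mask[i] = False
        residual = residual - grads[i]

    print(sorted(chosen))                 # indices of the proxy subset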
- HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters
HyperNP is a scalable method that allows for real-time interactive exploration of projection methods by training neural network approximations.
We evaluate HyperNP across three datasets in terms of performance and speed.
arXiv Detail & Related papers (2021-06-25T17:28:14Z)
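As a rough stand-in for the idea above, the sketch below fits a small MLP to a precomputed 2-D embedding so that new points can be projected in a single forward pass. PCA is used in place of the expensive projections (such as t-SNE) that HyperNP targets; the architecture is an assumption.

    # Fit a small MLP to mimic a precomputed 2-D projection (stand-in for HyperNP).
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    X, _ = load_digits(return_X_y=True)
    embedding = PCA(n_components=2).fit_transform(X)   # "expensive" projection to mimic

    approx = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    approx.fit(X, embedding)                           # learn the data -> 2-D mapping

    # The network now projects points in one forward pass, fast enough for interaction.
    print(approx.predict(X[:3]))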
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
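Zeroth-order hyper-gradients replace exact differentiation through training with finite-difference estimates. The sketch below averages two-point estimates for a single regularization hyperparameter; the closed-form ridge "training algorithm" and all constants are toy stand-ins, not the HOZOG formulation.

    # Toy zeroth-order hyper-gradient descent on a scalar regularization strength.
    import numpy as np

    rng = np.random.default_rng(0)
    Xtr, ytr = rng.normal(size=(50, 3)), rng.normal(size=50)
    Xval, yval = rng.normal(size=(50, 3)), rng.normal(size=50)

    def train(lam):
        # Closed-form ridge regression; stands in for the black-box algorithm A.
        return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(3), Xtr.T @ ytr)

    def val_loss(lam):
        w = train(lam)
        return float(np.mean((Xval @ w - yval) ** 2))

    lam, lr, mu = 1.0, 0.1, 1e-3
    for _ in range(100):
        # Average a few two-point finite-difference (zeroth-order) estimates.
        g = np.mean([(val_loss(lam + mu * u) - val_loss(lam - mu * u)) / (2 * mu) * u
                     for u in rng.choice([-1.0, 1.0], size=4)])
        lam = max(1e-6, lam - lr * g)

    print(lam, val_loss(lam))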
- Online hyperparameter optimization by real-time recurrent learning
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wall-clock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z)
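The paper's RTRL machinery is involved; as a much simpler stand-in for updating parameters and a hyperparameter together online, the sketch below adapts a learning rate from the agreement of successive gradients (hypergradient-descent style). The model and constants are illustrative.

    # Simplified stand-in: adapt the learning rate online from the agreement of
    # successive gradients (hypergradient style), not the paper's RTRL machinery.
    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
    w = np.zeros(3)
    lr, meta_lr = 0.01, 1e-4
    prev_g = np.zeros(3)

    for t in range(500):
        i = t % len(X)
        g = (X[i] @ w - y[i]) * X[i]                      # per-example loss gradient
        lr = max(1e-5, lr + meta_lr * float(g @ prev_g))  # grow lr when gradients agree
        w -= lr * g                          # parameters and hyperparameter co-evolve
        prev_g = g

    print(lr, w)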
- Adaptive pruning-based optimization of parameterized quantum circuits
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
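PECT optimizes subsets of the ansatz parameters in a sequence of smaller variational problems rather than all parameters at once. The sketch below is a purely classical block-coordinate analogue on a toy quadratic cost, not quantum-circuit code; the blocks, cost, and optimizer choice are assumptions.

    # Classical analogue of staged parameter optimization: optimize one small
    # block of parameters at a time instead of the full vector at once.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))

    def cost(theta):                      # toy stand-in for the circuit's cost function
        return float(theta @ (A.T @ A) @ theta - theta.sum())

    theta = np.zeros(8)
    for block in [range(0, 4), range(4, 8), range(0, 4), range(4, 8)]:
        idx = list(block)

        def sub(x):                       # cost as a function of the active block only
            t = theta.copy()
            t[idx] = x
            return cost(t)

        theta[idx] = minimize(sub, theta[idx], method="Nelder-Mead").x

    print(cost(theta))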
- Automatic Setting of DNN Hyper-Parameters by Mixing Bayesian Optimization and Tuning Rules
We build a new algorithm for evaluating and analyzing the results of the network on the training and validation sets.
We use a set of tuning rules to add new hyper-parameters and/or to reduce the hyper-parameter search space to select a better combination.
arXiv Detail & Related papers (2020-06-03T08:53:48Z)
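The mix of search plus tuning rules above can be sketched as a loop that inspects results between rounds and reshapes the search space. Random sampling stands in for the Bayesian optimization step, and the single "shrink around the incumbent" rule is an illustrative assumption.

    # Sketch: random search plus "tuning rules" that reshape the space each round.
    import random

    space = {"lr": (1e-4, 1e-1), "batch": [16, 32, 64, 128]}

    def evaluate(cfg):
        # Stand-in for train/validate; here it favors lr near 1e-3 and batch 32.
        return -abs(cfg["lr"] - 1e-3) - 0.01 * abs(cfg["batch"] - 32)

    history = []
    for round_ in range(3):
        for _ in range(10):
            cfg = {"lr": random.uniform(*space["lr"]),
                   "batch": random.choice(space["batch"])}
            history.append((evaluate(cfg), cfg))
        best = max(history, key=lambda t: t[0])[1]
        # Tuning rule: shrink the lr range around the incumbent best configuration.
        lo, hi = space["lr"]
        space["lr"] = (max(lo, best["lr"] / 3), min(hi, best["lr"] * 3))

    print(max(history, key=lambda t: t[0]))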
- Weighted Random Search for CNN Hyperparameter Optimization
We introduce the Weighted Random Search (WRS) method, a combination of Random Search (RS) and a probabilistic greedy heuristic.
The criterion is the classification accuracy achieved within the same number of tested combinations of hyperparameter values.
According to our experiments, the WRS algorithm outperforms the other methods.
arXiv Detail & Related papers (2020-03-30T09:40:14Z)
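In WRS, once a best-so-far configuration exists, each hyperparameter value is re-sampled with some probability and otherwise kept from the incumbent. The search space, re-sampling weights, and scoring function below are illustrative.

    # WRS sketch: re-sample each dimension with probability p, otherwise keep the
    # best-so-far value for that dimension (the greedy component of the method).
    import random

    SPACE = {"lr": [1e-4, 1e-3, 1e-2], "depth": [2, 4, 8, 16], "units": [32, 64, 128]}
    P_RESAMPLE = {"lr": 0.6, "depth": 0.4, "units": 0.4}   # illustrative weights

    def score(cfg):   # stand-in for the CNN's classification accuracy
        return -abs(cfg["depth"] - 8) - abs(cfg["units"] - 64) / 64 - abs(cfg["lr"] - 1e-3)

    best = {k: random.choice(v) for k, v in SPACE.items()}
    best_s = score(best)
    for _ in range(50):                    # fixed budget of tested combinations
        cand = {k: random.choice(SPACE[k]) if random.random() < P_RESAMPLE[k] else best[k]
                for k in SPACE}
        s = score(cand)
        if s > best_s:
            best, best_s = cand, s

    print(best, best_s)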
- Automatic Hyper-Parameter Optimization Based on Mapping Discovery from Data to Hyper-Parameters
We propose an efficient automatic hyper-parameter optimization approach based on the mapping from data to the corresponding hyper-parameters.
We show that the proposed approach significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-03T19:26:23Z)
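Learning a mapping from data to hyper-parameters amounts to regressing from dataset meta-features onto previously discovered good hyper-parameter values. The meta-features, targets, and regressor below are illustrative stand-ins, not the paper's mapping-discovery procedure.

    # Sketch: learn a mapping from dataset meta-features to good hyper-parameters.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Meta-features of past datasets (e.g., size, dimensionality, class balance).
    meta = rng.uniform(size=(40, 3))
    # Best hyper-parameters previously found for each dataset (synthetic here).
    best_hp = np.column_stack([10 + 200 * meta[:, 0], 2 + 20 * meta[:, 1]])

    mapper = RandomForestRegressor(random_state=0).fit(meta, best_hp)

    new_dataset_meta = np.array([[0.5, 0.2, 0.8]])
    print(mapper.predict(new_dataset_meta))   # predicted (n_estimators, max_depth)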
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.