OptABC: an Optimal Hyperparameter Tuning Approach for Machine Learning
Algorithms
- URL: http://arxiv.org/abs/2112.08511v1
- Date: Wed, 15 Dec 2021 22:33:39 GMT
- Title: OptABC: an Optimal Hyperparameter Tuning Approach for Machine Learning
Algorithms
- Authors: Leila Zahedi, Farid Ghareh Mohammadi, M. Hadi Amini
- Abstract summary: OptABC is proposed to help the ABC algorithm converge faster toward a near-optimum solution.
OptABC integrates the artificial bee colony algorithm, K-Means clustering, a greedy algorithm, and an opposition-based learning strategy.
Experimental results demonstrate the effectiveness of OptABC compared to existing approaches in the literature.
- Score: 1.6114012813668934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter tuning in machine learning algorithms is a computationally
challenging task due to the large-scale nature of the problem. In order to
develop an efficient strategy for hyper-parameter tuning, one promising
solution is to use swarm intelligence algorithms. Artificial Bee Colony (ABC)
optimization lends itself as a promising and efficient optimization algorithm
for this purpose. However, in some cases, ABC can suffer from a slow
convergence rate or execution time due to the poor initial population of
solutions and expensive objective functions. To address these concerns, a novel
algorithm, OptABC, is proposed to help the ABC algorithm converge faster toward
a near-optimum solution. OptABC integrates the artificial bee colony algorithm,
K-Means clustering, a greedy algorithm, and an opposition-based learning
strategy for tuning the hyper-parameters of different machine learning models.
OptABC employs these techniques in an attempt to diversify the initial
population, and hence enhance the convergence ability without significantly
decreasing the accuracy. In order to validate the performance of the proposed
method, we compare the results with previous state-of-the-art approaches.
Experimental results demonstrate the effectiveness of OptABC compared to
existing approaches in the literature.
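As a rough illustration of the kind of search the abstract describes, the sketch below combines an ABC-style employed/scout loop with opposition-based initialization on a toy two-dimensional hyperparameter space. This is a minimal sketch under stated assumptions, not the authors' implementation: the `objective` function, bounds, and colony settings are placeholders for an expensive cross-validated model evaluation, and the K-Means and greedy components of OptABC are not reproduced here.

```python
import numpy as np

# Hypothetical toy objective standing in for a cross-validated model score;
# in an OptABC-style setting this would be an expensive ML model evaluation.
def objective(x):
    # x = [learning_rate, regularization]; lower is better in this toy setup
    return (x[0] - 0.1) ** 2 + (x[1] - 1.0) ** 2

bounds = np.array([[0.001, 0.5],   # learning_rate range (assumed)
                   [0.0, 10.0]])   # regularization range (assumed)
lo, hi = bounds[:, 0], bounds[:, 1]

rng = np.random.default_rng(0)
n_food = 10   # number of food sources (candidate hyperparameter sets)
limit = 5     # abandonment limit before the scout phase resamples a source
n_iter = 50

# Opposition-based initialization: for each random point x, also evaluate
# its "opposite" lo + hi - x and keep the better half of the combined pool.
rand = lo + rng.random((n_food, 2)) * (hi - lo)
opp = lo + hi - rand
pool = np.vstack([rand, opp])
scores = np.array([objective(p) for p in pool])
idx = np.argsort(scores)[:n_food]
foods, fitness = pool[idx], scores[idx]
trials = np.zeros(n_food, dtype=int)

for _ in range(n_iter):
    # Employed/onlooker-style phase: perturb each food source relative to a
    # random partner and greedily keep improvements.
    for i in range(n_food):
        k = int(rng.integers(n_food))
        while k == i:
            k = int(rng.integers(n_food))
        phi = rng.uniform(-1, 1, size=2)
        cand = np.clip(foods[i] + phi * (foods[i] - foods[k]), lo, hi)
        c_fit = objective(cand)
        if c_fit < fitness[i]:
            foods[i], fitness[i], trials[i] = cand, c_fit, 0
        else:
            trials[i] += 1
    # Scout phase: abandon stagnant sources and re-sample them at random.
    for i in range(n_food):
        if trials[i] > limit:
            foods[i] = lo + rng.random(2) * (hi - lo)
            fitness[i] = objective(foods[i])
            trials[i] = 0

best = foods[np.argmin(fitness)]
print("best hyperparameters:", best, "score:", fitness.min())
```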
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Hybrid ACO-CI Algorithm for Beam Design problems [0.4397520291340694]
A novel hybrid version of the Ant colony optimization (ACO) method is developed using the sample space reduction technique of the Cohort Intelligence (CI) algorithm.
The proposed work could be investigated for real-world applications encompassing engineering and health care domains.
arXiv Detail & Related papers (2023-03-29T04:37:14Z)
- Enhancing Machine Learning Model Performance with Hyper Parameter
Optimization: A Comparative Study [0.0]
One of the most critical issues in machine learning is the selection of appropriate hyperparameters for training models.
Hyperparameter optimization (HPO) is a popular topic that artificial intelligence studies have focused on recently.
In this study, classical methods, such as grid search, random search, and Bayesian optimization, and population-based algorithms, such as genetic algorithms and particle swarm optimization, are discussed.
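For concreteness, here is a minimal, hedged example of two of the classical methods such a study compares, grid search and random search, using scikit-learn on a toy dataset; the model, parameter ranges, and search budget are illustrative assumptions and not the study's experimental setup.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid search: exhaustively evaluates every combination on a fixed grid.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)

# Random search: samples a fixed budget of configurations from distributions.
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)},
    n_iter=20, cv=5, random_state=0,
)
rand.fit(X, y)

print("grid best:", grid.best_params_, grid.best_score_)
print("random best:", rand.best_params_, rand.best_score_)
```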
arXiv Detail & Related papers (2023-02-14T10:12:10Z)
- Introductory Studies of Swarm Intelligence Techniques [1.2930503923129208]
Swarm intelligence involves the collective study of individuals and their mutual interactions leading to intelligent behavior of the swarm.
The chapter presents various population-based SI algorithms, their fundamental structures along with their mathematical models.
arXiv Detail & Related papers (2022-09-26T16:29:55Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
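A minimal bandit-style sketch of this setting follows: an optimistic (lower-confidence-bound) rule selects among candidate algorithms whose observed runtimes are censored at a cutoff. The runtime distributions, cutoff, and horizon below are invented for illustration and do not correspond to the paper's adapted algorithms or losses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_algos, cutoff, horizon = 3, 10.0, 500

def run_algorithm(a):
    # Hypothetical runtime draw; a real OAS setting would run solver `a`
    # on the incoming instance and stop it at the cutoff.
    true_means = [4.0, 6.0, 9.0]
    return rng.exponential(true_means[a])

counts = np.zeros(n_algos)
loss_sums = np.zeros(n_algos)  # censored runtimes serve as the loss

for t in range(1, horizon + 1):
    if t <= n_algos:
        a = t - 1  # play each arm once to initialize statistics
    else:
        # Optimism for losses: pick the arm with the lowest confidence bound.
        means = loss_sums / counts
        bonus = cutoff * np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmin(means - bonus))
    runtime = min(run_algorithm(a), cutoff)  # censoring at the cutoff
    counts[a] += 1
    loss_sums[a] += runtime

print("selections per algorithm:", counts)
```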
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Improved Binary Artificial Bee Colony Algorithm [0.0]
The Artificial Bee Colony (ABC) algorithm is an evolutionary optimization algorithm based on swarm intelligence.
We improve the ABC algorithm to solve binary optimization problems and call it the improved binary Artificial Bee Colony (ibinABC).
The proposed method consists of an update mechanism based on fitness values and the processing of different numbers of decision variables.
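As a loose illustration of a fitness-dependent binary update, the sketch below flips more bits for worse food sources and fewer for better ones on a OneMax toy objective; the flip schedule and objective are assumptions for illustration, not the ibinABC mechanism from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_food, n_iter = 20, 8, 100

def fitness(bits):
    return bits.sum()  # OneMax: maximize the number of ones

foods = rng.integers(0, 2, size=(n_food, n_bits))
fits = np.array([fitness(f) for f in foods])

for _ in range(n_iter):
    for i in range(n_food):
        # Worse solutions flip more decision variables, better ones fewer
        # (an assumed schedule standing in for a fitness-based mechanism).
        n_flip = max(1, int(round((1 - fits[i] / n_bits) * 5)))
        cand = foods[i].copy()
        flip_idx = rng.choice(n_bits, size=n_flip, replace=False)
        cand[flip_idx] ^= 1
        if fitness(cand) > fits[i]:   # greedy selection, as in ABC
            foods[i], fits[i] = cand, fitness(cand)

print("best fitness:", fits.max(), "of", n_bits)
```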
arXiv Detail & Related papers (2020-03-12T17:22:52Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)