Search Algorithms for Automated Hyper-Parameter Tuning
- URL: http://arxiv.org/abs/2104.14677v1
- Date: Thu, 29 Apr 2021 22:11:52 GMT
- Title: Search Algorithms for Automated Hyper-Parameter Tuning
- Authors: Leila Zahedi, Farid Ghareh Mohammadi, Shabnam Rezapour, Matthew W.
Ohland, M. Hadi Amini
- Abstract summary: We develop two automated Hyper-Parameter Optimization methods, namely grid search and random search, to assess and improve a previous study's performance.
Experiment results show that applying random search and grid search on machine learning algorithms improves accuracy.
- Score: 1.2233362977312945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is a powerful method for modeling in different fields such
as education. Its capability to accurately predict students' success makes it
an ideal tool for decision-making tasks related to higher education. The
accuracy of machine learning models depends on selecting the proper
hyper-parameters. However, it is not an easy task because it requires time and
expertise to tune the hyper-parameters to fit the machine learning model. In
this paper, we examine the effectiveness of automated hyper-parameter tuning
techniques in the realm of students' success. To that end, we develop two
automated Hyper-Parameter Optimization methods, namely grid search and random
search, to assess and improve a previous study's performance. The experiment
results show that applying random search and grid search on machine learning
algorithms improves accuracy. We empirically show automated methods'
superiority on real-world educational data (MIDFIELD) for tuning HPs of
conventional machine learning classifiers. This work emphasizes the
effectiveness of automated hyper-parameter optimization when applying machine
learning in the education field to support faculty, directors, and non-expert
users in decisions that improve students' success.
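The two search strategies the abstract names can be sketched with scikit-learn. This is a minimal illustration, not the paper's actual pipeline: the MIDFIELD data, the classifiers used, and the exact search spaces are assumptions here, replaced by synthetic data and a random forest.

```python
# Hedged sketch: grid search vs. random search for hyper-parameter tuning.
# Synthetic data stands in for the paper's MIDFIELD dataset; the classifier
# and search spaces are illustrative choices, not the study's.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Grid search: exhaustively evaluates every combination in the grid
# (here 2 x 3 = 6 configurations).
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=3,
)
grid.fit(X, y)

# Random search: samples a fixed budget of configurations from
# distributions, often matching grid search with fewer evaluations.
rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(50, 150),
                         "max_depth": [3, 5, None]},
    n_iter=5,
    cv=3,
    random_state=0,
)
rand.fit(X, y)

print("grid:", grid.best_params_, round(grid.best_score_, 3))
print("random:", rand.best_params_, round(rand.best_score_, 3))
```

The trade-off the paper evaluates is visible in the budgets: grid search's cost grows multiplicatively with each added hyper-parameter, while random search's cost is fixed by `n_iter`.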
Related papers
- Simulation-Aided Policy Tuning for Black-Box Robot Learning [47.83474891747279]
We present a novel black-box policy search algorithm focused on data-efficient policy improvements.
The algorithm learns directly on the robot and treats simulation as an additional information source to speed up the learning process.
We show fast and successful task learning on a robot manipulator with the aid of an imperfect simulator.
arXiv Detail & Related papers (2024-11-21T15:52:23Z) - Hyperparameter Optimization in Machine Learning [34.356747514732966]
Hyperparameters are configuration variables controlling the behavior of machine learning algorithms.
The choice of their values determines the effectiveness of systems based on these technologies.
We present a unified treatment of hyperparameter optimization, providing the reader with examples and insights into the state-of-the-art.
arXiv Detail & Related papers (2024-10-30T09:39:22Z) - Scrap Your Schedules with PopDescent [0.0]
Population Descent (PopDescent) is a memetic, population-based search technique.
Our trials on standard machine learning vision tasks show that PopDescent converges faster than existing search methods, finding model parameters with test-loss values up to 18% lower.
arXiv Detail & Related papers (2023-10-23T08:11:17Z) - Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning [72.83293818245978]
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z) - AdaGrid: Adaptive Grid Search for Link Prediction Training Objective [58.79804082133998]
Training objective crucially influences the model's performance and generalization capabilities.
We propose Adaptive Grid Search (AdaGrid) which dynamically adjusts the edge message ratio during training.
We show that AdaGrid can boost the performance of the models by up to 1.9% while being nine times more time-efficient than a complete search.
arXiv Detail & Related papers (2022-03-30T09:24:17Z) - Accelerating Robotic Reinforcement Learning via Parameterized Action
Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z) - To tune or not to tune? An Approach for Recommending Important
Hyperparameters [2.121963121603413]
We consider building the relationship between the performance of the machine learning models and their hyperparameters to discover the trend and gain insights.
Our results enable users to decide whether it is worth conducting a possibly time-consuming tuning strategy.
arXiv Detail & Related papers (2021-08-30T08:54:58Z) - Experimental Investigation and Evaluation of Model-based Hyperparameter
Optimization [0.3058685580689604]
This article presents an overview of theoretical and practical results for popular machine learning algorithms.
The R package mlr is used as a uniform interface to the machine learning models.
arXiv Detail & Related papers (2021-07-19T11:37:37Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z) - On Hyperparameter Optimization of Machine Learning Algorithms: Theory
and Practice [10.350337750192997]
We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms.
This paper will help industrial users, data analysts, and researchers to better develop machine learning models.
arXiv Detail & Related papers (2020-07-30T21:11:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.