Hybrid Algorithm Selection and Hyperparameter Tuning on Distributed
Machine Learning Resources: A Hierarchical Agent-based Approach
- URL: http://arxiv.org/abs/2309.06604v2
- Date: Thu, 14 Sep 2023 00:49:47 GMT
- Title: Hybrid Algorithm Selection and Hyperparameter Tuning on Distributed
Machine Learning Resources: A Hierarchical Agent-based Approach
- Authors: Ahmad Esmaeili, Julia T. Rayz, Eric T. Matson
- Abstract summary: This paper proposes a fully automatic and collaborative agent-based mechanism for selecting distributedly organized machine learning algorithms.
Our solution is totally correct and exhibits linear time and space complexity in the size of the available resources.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithm selection and hyperparameter tuning are critical steps in both
academic and applied machine learning. At the same time, these steps are
becoming increasingly delicate due to the rapid growth in the number,
diversity, and distribution of machine learning resources. Multi-agent
systems, when applied to the design of machine learning platforms, bring about
several distinctive characteristics such as scalability, flexibility, and
robustness, just to name a few. This paper proposes a fully automatic and
collaborative agent-based mechanism for selecting distributedly organized
machine learning algorithms and simultaneously tuning their hyperparameters.
Our method builds upon an existing agent-based hierarchical machine-learning
platform and augments its query structure to support the aforementioned
functionalities without being limited to specific learning, selection, and
tuning mechanisms. We have conducted theoretical assessments, formal
verification, and analytical study to demonstrate the correctness, resource
utilization, and computational efficiency of our technique. According to the
results, our solution is totally correct and exhibits time and space complexity
that is linear in the size of the available resources. To provide concrete
examples of how the proposed methodologies can adapt and perform effectively
across a range of algorithmic options and datasets, we have also conducted a
series of experiments using a system comprising 24 algorithms and 9 datasets.
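To make the mechanism concrete, below is a minimal, hypothetical Python sketch of the kind of hierarchy the abstract describes: a combined selection-and-tuning query is pushed down a tree of agents to leaf algorithm agents, each of which tunes its own hyperparameters, and only the best (algorithm, hyperparameters, score) triple propagates back up. Every agent is visited once, so the traversal is linear in the number of agents. All class and function names here are illustrative and are not the paper's actual API.

```python
import random

class AlgorithmAgent:
    """Leaf agent wrapping one algorithm; tunes its own hyperparameters."""
    def __init__(self, name, hp_space, evaluate):
        self.name = name
        self.hp_space = hp_space   # dict: hyperparameter name -> candidate values
        self.evaluate = evaluate   # callable(name, hps) -> validation score

    def handle_query(self, budget):
        # Random search stands in for any local tuning mechanism.
        best_score, best_hps = float("-inf"), None
        for _ in range(budget):
            hps = {k: random.choice(v) for k, v in self.hp_space.items()}
            score = self.evaluate(self.name, hps)
            if score > best_score:
                best_score, best_hps = score, hps
        return self.name, best_hps, best_score

class ManagerAgent:
    """Internal agent: forwards the query and keeps only the best reply."""
    def __init__(self, children):
        self.children = children

    def handle_query(self, budget):
        replies = [child.handle_query(budget) for child in self.children]
        return max(replies, key=lambda reply: reply[2])

# Toy usage with a placeholder evaluator instead of real cross-validation.
def mock_eval(name, hps):
    return random.random()

svm = AlgorithmAgent("svm", {"C": [0.1, 1, 10]}, mock_eval)
rf = AlgorithmAgent("rf", {"n_estimators": [50, 100]}, mock_eval)
root = ManagerAgent([ManagerAgent([svm]), ManagerAgent([rf])])
print(root.handle_query(budget=5))   # (algorithm, hyperparameters, score)
```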
Related papers
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Multi-Dimensional Ability Diagnosis for Machine Learning Algorithms [88.93372675846123]
We propose a task-agnostic evaluation framework Camilla for evaluating machine learning algorithms.
We use cognitive diagnosis assumptions and neural networks to learn the complex interactions among algorithms, samples and the skills of each sample.
In our experiments, Camilla outperforms state-of-the-art baselines in metric reliability, rank consistency, and rank stability.
arXiv Detail & Related papers (2023-07-14T03:15:56Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
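As a rough illustration of the idea summarized in the HyperImpute entry above, and not the library's actual interface, the sketch below iteratively re-imputes each incomplete column from the others and picks each column's model by cross-validation score:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def iterative_impute(X, n_rounds=3):
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])   # initial mean fill
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            if not mask[:, j].any():
                continue
            obs = ~mask[:, j]                  # rows where column j is observed
            others = np.delete(X, j, axis=1)   # all remaining columns as features
            # Automatic per-column model selection: keep the best scorer.
            candidates = [Ridge(), RandomForestRegressor(n_estimators=50)]
            scores = [cross_val_score(m, others[obs], X[obs, j], cv=3).mean()
                      for m in candidates]
            model = candidates[int(np.argmax(scores))]
            model.fit(others[obs], X[obs, j])
            X[mask[:, j], j] = model.predict(others[mask[:, j]])
    return X
```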
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
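For reference, here is a minimal sketch of the setting this paper studies, Q-learning with linear function approximation; the paper's exploration variant itself is not reproduced:

```python
import numpy as np

def q_update(w, phi, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference update of the weight vector w,
    where Q(s, a) is approximated as the dot product w . phi(s, a)."""
    q_next = max(w @ phi(s_next, b) for b in actions)   # greedy backup
    td_error = r + gamma * q_next - w @ phi(s, a)
    return w + alpha * td_error * phi(s, a)
```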
- Hierarchical Collaborative Hyper-parameter Tuning [0.0]
Hyper-parameter tuning is among the most critical stages in building machine learning solutions.
This paper demonstrates how multi-agent systems can be utilized to develop a distributed technique for determining near-optimal hyperparameter values.
arXiv Detail & Related papers (2022-05-11T05:16:57Z)
- Adaptive Discretization in Online Reinforcement Learning [9.560980936110234]
Two major questions in designing discretization-based algorithms are how to create the discretization and when to refine it.
We provide a unified theoretical analysis of tree-based hierarchical partitioning methods for online reinforcement learning.
Our algorithms are easily adapted to operating constraints, and our theory provides explicit bounds across each of the three facets.
arXiv Detail & Related papers (2021-10-29T15:06:15Z)
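The two design questions mentioned in the entry above can be made concrete with a small, hypothetical sketch: a one-dimensional state space is covered by a tree of cells, and a cell is refined once it has been visited often enough relative to its diameter. The splitting rule below is illustrative rather than the paper's exact choice:

```python
class Cell:
    def __init__(self, low, high):
        self.low, self.high = low, high   # 1-D interval for simplicity
        self.visits, self.children = 0, None

    def locate(self, x):
        """Descend to the leaf cell containing x, refining on the way."""
        if self.children:
            mid = (self.low + self.high) / 2
            return self.children[x >= mid].locate(x)   # False -> left, True -> right
        self.visits += 1
        # Split once visits reach the inverse squared diameter of the cell,
        # so resolution concentrates where the algorithm actually visits.
        if self.visits >= 1.0 / (self.high - self.low) ** 2:
            mid = (self.low + self.high) / 2
            self.children = (Cell(self.low, mid), Cell(mid, self.high))
        return self

root = Cell(0.0, 1.0)
leaf = root.locate(0.3)   # repeated calls refine the tree near visited states
```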
- Automated Evolutionary Approach for the Design of Composite Machine Learning Pipelines [48.7576911714538]
The proposed approach aims to automate the design of composite machine learning pipelines.
It designs pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them.
The software implementation of this approach is presented as an open-source framework.
arXiv Detail & Related papers (2021-06-26T23:19:06Z)
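A heavily simplified, hypothetical sketch of evolutionary pipeline design follows; real systems, including the one referenced above, use richer graph-based representations, and every name here is illustrative:

```python
import random

PREPROCESSORS = ["scale", "pca", "select_k_best"]
MODELS = ["rf", "svm", "logreg"]

def random_pipeline():
    # A pipeline is a short preprocessing chain ending in one estimator.
    return random.sample(PREPROCESSORS, k=random.randint(0, 2)) + [random.choice(MODELS)]

def mutate(pipeline):
    child = list(pipeline)
    child[-1] = random.choice(MODELS)   # swap the final estimator
    return child

def evolve(fitness, generations=20, pop_size=8):
    population = [random_pipeline() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(population, key=fitness)

# Stand-in fitness; a real system would score pipelines by cross-validation.
best = evolve(fitness=lambda p: random.random())
```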
- HAMLET: A Hierarchical Agent-based Machine Learning Platform [0.0]
HAMLET (Hierarchical Agent-based Machine LEarning plaTform) is a hybrid machine learning platform based on hierarchical multi-agent systems.
The proposed system models machine learning solutions as a hypergraph and autonomously sets up a multi-level structure of heterogeneous agents.
arXiv Detail & Related papers (2020-10-10T03:46:59Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently while maintaining comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice [10.350337750192997]
We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms.
This paper will help industrial users, data analysts, and researchers to better develop machine learning models.
arXiv Detail & Related papers (2020-07-30T21:11:01Z)
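As one concrete instance of the techniques such a survey covers, the snippet below runs a randomized hyperparameter search with scikit-learn; the dataset and search space are chosen purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [3, 5, None]},
    n_iter=5, cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```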