Efficient Automatic CASH via Rising Bandits
- URL: http://arxiv.org/abs/2012.04371v1
- Date: Tue, 8 Dec 2020 11:29:57 GMT
- Title: Efficient Automatic CASH via Rising Bandits
- Authors: Yang Li, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, Bin Cui
- Abstract summary: We propose an alternating optimization framework for the CASH problem.
We also introduce Rising Bandits, a CASH-oriented Multi-Armed Bandits (MAB) variant, to model the algorithm selection in CASH.
This framework combines the advantages of BO in solving the HPO problem with those of MABs in accelerating the algorithm selection.
- Score: 37.09843193057032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Combined Algorithm Selection and Hyperparameter optimization (CASH) is
one of the most fundamental problems in Automatic Machine Learning (AutoML).
The existing Bayesian optimization (BO) based solutions turn the CASH problem
into a Hyperparameter Optimization (HPO) problem by combining the
hyperparameters of all machine learning (ML) algorithms, and use BO methods to
solve it. As a result, these methods suffer from low efficiency due
to the huge hyperparameter space in CASH. To alleviate this issue, we propose
the alternating optimization framework, where the HPO problem for each ML
algorithm and the algorithm selection problem are optimized alternately. In
this framework, the BO methods are used to solve the HPO problem for each ML
algorithm separately, incorporating a much smaller hyperparameter space for BO
methods. Furthermore, we introduce Rising Bandits, a CASH-oriented Multi-Armed
Bandits (MAB) variant, to model the algorithm selection in CASH. This framework
can combine the advantages of BO in solving the HPO problem over a relatively
small hyperparameter space with those of MABs in accelerating the algorithm
selection. Moreover, we develop an efficient online algorithm to solve the
Rising Bandits with provable theoretical guarantees. The extensive
experiments on 30 OpenML datasets demonstrate the superiority of the proposed
approach over the competitive baselines.
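The alternating framework the abstract describes can be sketched in a few lines: each candidate ML algorithm is one bandit arm, one "pull" runs one HPO trial for that algorithm (plain random search stands in for BO here), and an optimistic rule decides which arm to pull next. This is a simplified illustration under those assumptions; `alternating_cash` and its bonus term are hypothetical stand-ins, not the paper's actual Rising Bandits rule.

```python
import random

def alternating_cash(algorithms, budget, seed=0):
    """Toy sketch of the alternating CASH loop: each ML algorithm is a
    bandit arm, one pull runs one HPO trial on that algorithm (random
    search stands in for BO), and an optimistic rule picks the next arm.
    The bonus term below is a simplified stand-in for the Rising Bandits
    upper bound, not the paper's actual rule."""
    rng = random.Random(seed)
    best = {name: 0.0 for name in algorithms}
    pulls = {name: 0 for name in algorithms}
    for _ in range(budget):
        # Optimism: best score so far plus a bonus that shrinks with pulls.
        arm = max(algorithms, key=lambda a: best[a] + 1.0 / (1 + pulls[a]))
        score = algorithms[arm](rng)  # one HPO trial for this algorithm
        best[arm] = max(best[arm], score)
        pulls[arm] += 1
    winner = max(best, key=best.get)
    return winner, best[winner]

# Toy usage: two "algorithms" whose validation score is a random draw
# with different ceilings; the bandit should concentrate on the better one.
algos = {"svm": lambda r: r.uniform(0.0, 0.5),
         "gbm": lambda r: r.uniform(0.0, 0.9)}
winner, score = alternating_cash(algos, budget=50)
```

Because the better arm's best observed score eventually exceeds the weaker arm's score plus its shrinking bonus, the budget concentrates on the stronger algorithm, which is the intuition behind pruning arms early.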
Related papers
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
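The regret-to-go idea can be made concrete with a small helper, assuming (as the summary states) that the token at step t is the cumulative regret accrued over the future part of the history; RIBBO's actual tokenization and normalization details are not reproduced here.

```python
def regret_to_go(values, optimum):
    """Regret-to-go tokens as the summary describes them: the token at
    step t is the cumulative regret over the FUTURE part of the history,
    sum_{i >= t} (optimum - values[i]). Computed in one backward pass."""
    tokens = []
    acc = 0.0
    for v in reversed(values):
        acc += optimum - v
        tokens.append(acc)
    return list(reversed(tokens))
```

For a history that reaches the optimum, the final token is 0, so conditioning generation on small regret-to-go values corresponds to imitating well-performing trajectories.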
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - OptABC: an Optimal Hyperparameter Tuning Approach for Machine Learning Algorithms [1.6114012813668934]
OptABC is proposed to help ABC algorithm in faster convergence toward a near-optimum solution.
OptABC integrates artificial bee colony algorithm, K-Means clustering, greedy algorithm, and opposition-based learning strategy.
Experimental results demonstrate the effectiveness of OptABC compared to existing approaches in the literature.
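Of the ingredients OptABC combines, opposition-based learning is the easiest to illustrate in isolation: for a candidate solution in a box-bounded search space, one also evaluates its "opposite" point. This is the standard definition of the technique; how OptABC wires it into the bee colony loop is not reproduced here, and the function name is illustrative.

```python
def opposite_point(x, lower, upper):
    """Opposition-based learning in its usual form: for a candidate
    solution x with bounds lower <= x <= upper, the opposite point is
    lower + upper - x in each coordinate. Evaluating both the candidate
    and its opposite improves the odds of starting near the optimum."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]
```

A typical use is at initialization: generate a random population, compute the opposites, and keep the better half of the combined set.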
arXiv Detail & Related papers (2021-12-15T22:33:39Z) - A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z) - HyP-ABC: A Novel Automated Hyper-Parameter Tuning Algorithm Using Evolutionary Optimization [1.6114012813668934]
We propose HyP-ABC, an automatic hybrid hyper-parameter optimization algorithm using the modified artificial bee colony approach.
Compared to the state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned.
arXiv Detail & Related papers (2021-09-11T16:45:39Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG)
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
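HOZOG's exact estimator is not given in the summary above; the general building block it rests on is the standard two-point zeroth-order gradient estimate, sketched below. The function name and defaults are illustrative, not from the paper.

```python
import random

def zeroth_order_grad(f, x, mu=1e-3, seed=0):
    """Standard two-point zeroth-order gradient estimate:
    g_hat = ((f(x + mu*u) - f(x)) / mu) * u for a random direction u.
    Averaging g_hat over many directions approximates the true gradient
    without ever differentiating f."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0) for _ in x]
    fx = f(x)
    x_pert = [xi + mu * ui for xi, ui in zip(x, u)]
    directional = (f(x_pert) - fx) / mu  # finite difference along u
    return [directional * ui for ui in u]

# For f(x) = 3*x0 the estimate is 3*u0**2: always positive, and equal
# to the true gradient 3 in expectation since E[u0**2] = 1.
est = zeroth_order_grad(lambda x: 3.0 * x[0], [1.0])
```

This two-function-evaluation scheme is what makes such methods attractive for hyperparameter optimization, where the objective (validation loss after training) has no usable analytic gradient.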
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - A Hybrid Framework Using a QUBO Solver For Permutation-Based Combinatorial Optimization [5.460573052311485]
We propose a hybrid framework to solve large-scale permutation-based problems using a high-performance quadratic unconstrained binary optimization solver.
We propose techniques to overcome the challenges in using a QUBO solver that typically comes with limited numbers of bits.
arXiv Detail & Related papers (2020-09-27T07:15:25Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Frugal Optimization for Cost-related Hyperparameters [43.599155206275306]
We develop a new cost-frugal HPO solution for machine learning algorithms.
We prove a convergence rate of $O(\sqrt{d}/\sqrt{K})$ and an $O(d\epsilon^{-2})$-approximation guarantee on the total cost.
We provide strong empirical results in comparison with state-of-the-art HPO methods on large AutoML benchmarks.
arXiv Detail & Related papers (2020-05-04T15:40:44Z) - Reinforcement Learning Enhanced Quantum-inspired Algorithm for Combinatorial Optimization [0.0]
We use a reinforcement learning agent in conjunction with a quantum-inspired algorithm to solve the Ising energy minimization problem.
We propose a new Rescaled Ranked Reward (R3) method that enables a stable single-player version of self-play training.
arXiv Detail & Related papers (2020-02-11T20:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.