Transfer Learning based Search Space Design for Hyperparameter Tuning
- URL: http://arxiv.org/abs/2206.02511v1
- Date: Mon, 6 Jun 2022 11:48:58 GMT
- Title: Transfer Learning based Search Space Design for Hyperparameter Tuning
- Authors: Yang Li, Yu Shen, Huaijun Jiang, Tianyi Bai, Wentao Zhang, Ce Zhang
and Bin Cui
- Abstract summary: We introduce an automatic method to design the BO search space with the aid of tuning history from past tasks.
This simple yet effective approach can be used to endow many existing BO methods with transfer learning capabilities.
- Score: 31.96809688536572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The tuning of hyperparameters has become increasingly important as machine
learning (ML) models are applied ever more widely in data mining applications.
Among various approaches, Bayesian optimization (BO) is a successful
methodology to tune hyperparameters automatically. While traditional methods
optimize each tuning task in isolation, there has been recent interest in
speeding up BO by transferring knowledge across previous tasks. In this work,
we introduce an automatic method to design the BO search space with the aid of
tuning history from past tasks. This simple yet effective approach can be used
to endow many existing BO methods with transfer learning capabilities. In
addition, it enjoys three advantages: universality, generality, and
safeness. Extensive experiments show that our approach considerably boosts
BO by designing a promising and compact search space instead of using the
entire space, and outperforms state-of-the-art methods on a wide range of
benchmarks, including machine learning and deep learning tuning tasks, and
neural architecture search.
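To make the idea concrete, below is a minimal sketch of designing a compact search space from past tuning history, assuming each past task's history is a set of (configuration, validation error) pairs. The top-fraction heuristic, the padding rule, and all names are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def design_search_space(histories, full_bounds, top_frac=0.2, margin=0.1):
    """Prune a BO search space to the region where past tasks found
    good configurations.

    histories   : list of (X, y) pairs, one per past task; X is an
                  (n, d) array of configurations, y an (n,) array of
                  validation errors (lower is better).
    full_bounds : (d, 2) array with the original [low, high] per dimension.
    top_frac    : fraction of best configurations kept per task
                  (assumed heuristic).
    margin      : relative padding around the pruned box, a safeguard
                  against over-aggressive pruning.
    """
    top_configs = []
    for X, y in histories:
        k = max(1, int(top_frac * len(y)))
        best = np.argsort(y)[:k]          # indices of the best configs
        top_configs.append(X[best])
    top = np.vstack(top_configs)

    lo, hi = top.min(axis=0), top.max(axis=0)
    pad = margin * (hi - lo)
    lo, hi = lo - pad, hi + pad

    # Never leave the original space: clip back to the full bounds.
    lo = np.maximum(lo, full_bounds[:, 0])
    hi = np.minimum(hi, full_bounds[:, 1])
    return np.stack([lo, hi], axis=1)     # compact (d, 2) box for BO
```

Any off-the-shelf BO loop can then run inside the returned box rather than full_bounds, which is how a search-space design step can retrofit transfer learning onto existing BO methods.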
Related papers
- Deep Memory Search: A Metaheuristic Approach for Optimizing Heuristic Search [0.0]
We introduce a novel approach called Deep Heuristic Search (DHS), which models metaheuristic search as a memory-driven process.
DHS employs multiple search layers and memory-based exploration-exploitation mechanisms to navigate large, dynamic search spaces.
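The summary names the ingredients without the details; purely as an illustration of a memory-based exploration-exploitation loop (the sampling, perturbation, and memory rules below are generic assumptions, not DHS itself):

```python
import random

def memory_driven_search(objective, sample, perturb,
                         iters=1000, memory_size=10, explore_prob=0.3):
    # Illustrative memory-driven loop: explore fresh samples or exploit
    # (perturb) solutions remembered in a small elite memory.
    memory = []  # (score, solution) pairs, best first; minimization
    for _ in range(iters):
        if not memory or random.random() < explore_prob:
            cand = sample()                  # exploration: fresh sample
        else:
            _, base = random.choice(memory)  # exploitation: refine memory
            cand = perturb(base)
        memory.append((objective(cand), cand))
        memory.sort(key=lambda sc: sc[0])    # keep only the best entries
        del memory[memory_size:]
    return memory[0]
```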
arXiv Detail & Related papers (2024-10-22T14:16:49Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is augmenting the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm as the cumulative regret over the future part of the history.
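A minimal sketch of such tokens for a single history, assuming minimization and a known (or estimated) optimum f_star; the exact token definition in the paper may differ:

```python
def regret_to_go(values, f_star):
    # values: objective values y_1..y_T along one optimization history
    # (minimization). Token R_t = sum over i >= t of (y_i - f_star),
    # i.e. the cumulative regret over the future part of the history.
    tokens, acc = [], 0.0
    for y in reversed(values):
        acc += y - f_star
        tokens.append(acc)
    return tokens[::-1]

# Example: regret_to_go([3.0, 1.5, 1.0], f_star=1.0) == [2.5, 0.5, 0.0]
```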
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- HyperBO+: Pre-training a universal prior for Bayesian optimization with hierarchical Gaussian processes [7.963551878308098]
HyperBO+ is a pre-training approach for hierarchical Gaussian processes.
We show that HyperBO+ is able to generalize to unseen search spaces and achieves lower regrets than competitive baselines.
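As a rough two-level illustration of what "hierarchical" pre-training can mean (an assumption about the structure, not HyperBO+'s actual model): fit per-task GP lengthscales first, then fit a prior over them that unseen tasks can draw from:

```python
import numpy as np

def fit_lengthscale_prior(per_task_lengthscales):
    # Fit a log-normal hyper-prior over GP lengthscales from per-task
    # maximum-likelihood estimates (the two-level, hierarchical step).
    logs = np.log(np.asarray(per_task_lengthscales))
    return logs.mean(), logs.std(ddof=1)

def sample_lengthscale(mu, sigma, rng):
    # On an unseen task, initialize the kernel from the learned prior.
    return float(np.exp(rng.normal(mu, sigma)))
```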
arXiv Detail & Related papers (2022-12-20T18:47:10Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method can locate good hyperparameters at least 3 times more efficiently than the best competing methods.
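One way to picture "pre-training a tighter prior": choose kernel hyperparameters that maximize the GP marginal likelihood summed over the past, similar functions. The RBF kernel, grid search, and fixed noise level below are simplifying assumptions:

```python
import numpy as np

def rbf(X1, X2, ls):
    # Squared-exponential kernel with lengthscale ls.
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def log_marginal_likelihood(X, y, ls, noise=1e-3):
    # Standard GP evidence: -0.5 y^T K^-1 y - 0.5 log|K| - n/2 log(2*pi).
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def pretrain_lengthscale(past_tasks, grid=np.logspace(-2, 1, 30)):
    # Pick the lengthscale that best explains *all* past tasks, i.e. a
    # point estimate of a shared functional prior for the new task.
    scores = [sum(log_marginal_likelihood(X, y, ls) for X, y in past_tasks)
              for ls in grid]
    return grid[int(np.argmax(scores))]
```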
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
- Automatic tuning of hyper-parameters of reinforcement learning algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent is dependent on the setting of many hyper-parameters.
In this work, a novel approach for autonomous hyper-parameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to other manual tweaking and optimization-based approaches.
arXiv Detail & Related papers (2021-12-15T13:10:44Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
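The interface change is easy to sketch: the policy's flat output is decoded into a discrete primitive choice plus that primitive's continuous arguments. The primitive library and the output layout below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

# Hypothetical primitive library: (name, number of continuous arguments).
PRIMITIVES = [("reach", 3), ("grasp", 1), ("lift", 1)]

def decode_action(policy_output):
    # policy_output: 1-D array; the first len(PRIMITIVES) entries score
    # the discrete primitive choice, the rest hold arguments for every
    # primitive, of which only the chosen primitive's slice is executed.
    n = len(PRIMITIVES)
    idx = int(np.argmax(policy_output[:n]))
    offset = n + sum(k for _, k in PRIMITIVES[:idx])
    name, n_args = PRIMITIVES[idx]
    args = policy_output[offset:offset + n_args]
    return name, args
```

The RL algorithm then learns over this (choice, arguments) interface instead of raw low-level actions, which is the "simple change to the action interface" the summary refers to.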
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- Bayesian Optimization and Deep Learning for steering wheel angle prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model, namely BOST-LSTM, which proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z)
- Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for Hyperparameter Recommendation [83.85021205445662]
We conduct a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework and propose its best instantiation, amortized auto-tuning (AT2), to speed up the tuning of machine learning models.
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
- Hyperparameter Transfer Learning with Adaptive Complexity [5.695163312473305]
We propose a new multi-task BO method that learns a set of ordered, non-linear basis functions of increasing complexity via nested drop-out and automatic relevance determination.
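A tiny sketch of the nested drop-out ingredient, under the assumption that it truncates an ordered set of basis functions at a random index during training, so that earlier (simpler) bases must carry the most information:

```python
import numpy as np

def nested_dropout(features, rng, p=0.2):
    # features: (n, B) activations of B basis functions ordered by
    # increasing complexity. Keep a geometrically sampled prefix and
    # zero out every basis function after it.
    B = features.shape[1]
    keep = min(int(rng.geometric(p)), B)  # truncation index in [1, B]
    out = features.copy()
    out[:, keep:] = 0.0
    return out

# Usage: nested_dropout(phi, np.random.default_rng(0))
```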
arXiv Detail & Related papers (2021-02-25T12:26:52Z)