HyperTuner: A Cross-Layer Multi-Objective Hyperparameter Auto-Tuning
Framework for Data Analytic Services
- URL: http://arxiv.org/abs/2304.10051v1
- Date: Thu, 20 Apr 2023 02:19:10 GMT
- Title: HyperTuner: A Cross-Layer Multi-Objective Hyperparameter Auto-Tuning
Framework for Data Analytic Services
- Authors: Hui Dou, Shanshan Zhu, Yiwen Zhang, Pengfei Chen and Zibin Zheng
- Abstract summary: We propose HyperTuner to execute cross-layer multi-objective hyperparameter auto-tuning.
We show that HyperTuner is superior in both convergence and diversity compared with four baseline algorithms.
Experiments with different training datasets, optimization objectives and machine learning platforms verify that HyperTuner adapts well to various data analytic service scenarios.
- Score: 25.889791254011794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter optimization (HPO) is vital for machine learning models.
Besides model accuracy, other tuning intentions such as model training time and
energy consumption are also worthy of attention from data analytic service
providers. Hence, it is essential to take both model hyperparameters and system
parameters into consideration to execute cross-layer multi-objective
hyperparameter auto-tuning. Towards this challenging target, we propose
HyperTuner in this paper. To address the formulated high-dimensional black-box
multi-objective optimization problem, HyperTuner first conducts multi-objective
parameter importance ranking with its MOPIR algorithm and then leverages the
proposed ADUMBO algorithm to find the Pareto-optimal configuration set. During
each iteration, ADUMBO selects the most promising configuration from the
generated Pareto candidate set by maximizing a newly designed metric that
adaptively balances the predictive uncertainty and the predicted mean across
all the surrogate models as the iterations proceed. We evaluate HyperTuner on
our local distributed TensorFlow cluster, and the experimental results show
that it consistently finds a Pareto configuration front superior in both
convergence and diversity to those of four baseline algorithms.
Moreover, experiments with different training datasets, different optimization
objectives and different machine learning platforms verify that HyperTuner
adapts well to various data analytic service scenarios.
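To make the selection step concrete, below is a minimal sketch of the kind of adaptive mean-uncertainty trade-off the abstract describes: one Gaussian-process surrogate per objective (e.g., model accuracy, training time, energy consumption) and an iteration-dependent weight that shifts from exploration toward exploitation. The exact ADUMBO metric is defined in the paper; the independent-GP surrogates, the `beta` decay schedule, and the summation across objectives here are illustrative assumptions, not the paper's definition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_surrogates(X, Y):
    """Fit one independent GP surrogate per objective column of Y.
    Rows of X are cross-layer configurations (model hyperparameters
    plus system parameters) encoded as numeric vectors."""
    models = []
    for m in range(Y.shape[1]):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, Y[:, m])
        models.append(gp)
    return models

def adumbo_style_score(x, models, t, beta0=2.0):
    """Illustrative selection metric: reward low predicted objective
    values (all objectives assumed minimized) plus a bonus for
    predictive uncertainty that decays with the iteration count t."""
    beta = beta0 / np.sqrt(t + 1)  # assumed decay schedule
    score = 0.0
    for gp in models:
        mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
        score += -mu[0] + beta * sigma[0]
    return score

def select_next(pareto_candidates, models, t):
    """Pick the most promising configuration from the Pareto candidate set."""
    scores = [adumbo_style_score(x, models, t) for x in pareto_candidates]
    return pareto_candidates[int(np.argmax(scores))]
```

Under a schedule like this, early iterations favor configurations whose objective values the surrogates are uncertain about, while later iterations increasingly trust the predicted means; the selected configuration would then be evaluated (trained and profiled) and the surrogates refit.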
Related papers
- Fairer and More Accurate Tabular Models Through NAS [14.147928131445852]
We present the first application of multi-objective Neural Architecture Search (NAS) combined with Hyperparameter Optimization (HPO) to the challenging domain of tabular data.
We show that models optimized solely for accuracy with NAS often fail to inherently address fairness concerns.
We produce architectures that consistently dominate state-of-the-art bias mitigation methods in fairness, accuracy, or both.
arXiv Detail & Related papers (2023-10-18T17:56:24Z)
- Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user to guess the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms and can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Fair and Green Hyperparameter Optimization via Multi-objective and Multiple Information Source Bayesian Optimization [0.19116784879310028]
FanG-HPO uses subsets of the large dataset (aka information sources) to obtain cheap approximations of both accuracy and fairness.
Experiments consider two benchmark (fairness) datasets and two machine learning algorithms.
arXiv Detail & Related papers (2022-05-18T10:07:21Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- Consolidated learning -- a domain-specific model-free optimization strategy with examples for XGBoost and MIMIC-IV [4.370097023410272]
This paper proposes a new formulation of the tuning problem, called consolidated learning.
In such settings, we are interested in the total optimization time rather than tuning for a single task.
We demonstrate the effectiveness of this approach through an empirical study of the XGBoost algorithm and a collection of predictive tasks extracted from the MIMIC-IV medical database.
arXiv Detail & Related papers (2022-01-27T21:38:53Z)
- Towards Robust and Automatic Hyper-Parameter Tunning [39.04604349338802]
We introduce a new class of HPO methods and explore how the low-rank factorization of the intermediate layers of a convolutional network can be used to define an analytical response surface.
We quantify how this surface behaves as a surrogate for model performance, and solve it using a trust-region search algorithm, which we call autoHyper.
arXiv Detail & Related papers (2021-11-28T05:27:34Z)
- Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for Hyperparameter Recommendation [83.85021205445662]
We propose an instantiation, amortized auto-tuning (AT2), to speed up the tuning of machine learning models.
AT2 results from a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework.
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
- Rethinking the Hyperparameters for Fine-tuning [78.15505286781293]
Fine-tuning from pre-trained ImageNet models has become the de facto standard for various computer vision tasks.
Current practice for fine-tuning typically involves an ad-hoc choice of hyperparameters.
This paper re-examines several common practices of setting hyperparameters for fine-tuning.
arXiv Detail & Related papers (2020-02-19T18:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.