A Collection of Quality Diversity Optimization Problems Derived from
Hyperparameter Optimization of Machine Learning Models
- URL: http://arxiv.org/abs/2204.14061v1
- Date: Thu, 28 Apr 2022 14:29:20 GMT
- Title: A Collection of Quality Diversity Optimization Problems Derived from
Hyperparameter Optimization of Machine Learning Models
- Authors: Lennart Schneider, Florian Pfisterer, Janek Thomas, Bernd Bischl
- Abstract summary: Quality Diversity Optimization generates diverse yet high-performing solutions to a given problem.
Our benchmark problems involve novel feature functions, such as interpretability or resource usage of models.
To allow for fast and efficient benchmarking, we build upon YAHPO Gym, a recently proposed open source benchmarking suite.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of Quality Diversity Optimization is to generate a collection of
diverse yet high-performing solutions to a given problem. Typical
benchmark problems are, for example, finding a repertoire of robot arm
configurations or a collection of game playing strategies. In this paper, we
propose a set of Quality Diversity Optimization problems that tackle
hyperparameter optimization of machine learning models - a so far underexplored
application of Quality Diversity Optimization. Our benchmark problems involve
novel feature functions, such as interpretability or resource usage of models.
To allow for fast and efficient benchmarking, we build upon YAHPO Gym, a
recently proposed open source benchmarking suite for hyperparameter
optimization that makes use of high performing surrogate models and returns
these surrogate model predictions instead of evaluating the true expensive
black box function. We present results of an initial experimental study
comparing different Quality Diversity optimizers on our benchmark problems.
Furthermore, we discuss future directions and challenges of Quality Diversity
Optimization in the context of hyperparameter optimization.
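The QD setting described in the abstract can be illustrated with a minimal MAP-Elites-style loop, which keeps only the best-performing solution found for each feature cell (e.g. a bin of model interpretability or resource usage). This is an illustrative sketch: the function names (`evaluate`, `sample`, `mutate`) are placeholders, not the paper's or YAHPO Gym's API, and the paper compares several QD optimizers rather than this one specifically.

```python
import random

def map_elites(evaluate, sample, mutate, n_iters=1000):
    """Minimal MAP-Elites loop.
    evaluate(x) -> (fitness, feature_cell); sample() draws a random
    solution; mutate(x) perturbs an existing one. All illustrative."""
    archive = {}  # feature cell -> (fitness, solution)
    for _ in range(n_iters):
        if archive and random.random() < 0.9:
            # Mostly mutate an elite already in the archive.
            _, parent = random.choice(list(archive.values()))
            x = mutate(parent)
        else:
            # Occasionally sample fresh to keep exploring.
            x = sample()
        fit, cell = evaluate(x)
        # Keep the solution only if its cell is empty or it improves on the elite.
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, x)
    return archive
```

On a toy problem (a scalar in [0, 1], fitness peaked at 0.5, cells given by decile), the returned archive holds one elite per reached decile, i.e. a diverse set of high performers rather than a single optimum.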
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
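The OWA objective mentioned above aggregates objective values by sorting them and applying rank-dependent weights; the sorting step is what makes it nondifferentiable. A minimal sketch (the weights and values below are illustrative, not from the paper):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort values in decreasing order,
    then take the weighted sum by rank. Fair variants place larger
    weights on the worst-ranked outcomes."""
    assert len(values) == len(weights)
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))
```

The weight is tied to the rank, not the objective: `owa([3, 1, 2], [0.5, 0.3, 0.2])` pairs 0.5 with the largest value 3 regardless of input order.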
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Multi-Objective Optimization of Performance and Interpretability of
Tabular Supervised Machine Learning Models [0.9023847175654603]
Interpretability is quantified via three measures: feature sparsity, interaction sparsity of features, and sparsity of non-monotone feature effects.
We show that our framework is capable of finding diverse models that are highly competitive or outperform state-of-the-art XGBoost or Explainable Boosting Machine models.
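As a rough illustration of the first of these measures (the paper's exact definitions may differ), feature sparsity can be scored as the fraction of available features a model leaves unused; the function and its arguments are hypothetical:

```python
def feature_sparsity(used_features, all_features):
    """Illustrative sparsity proxy: the share of available features the
    model does NOT use. Higher means sparser and, heuristically, a more
    interpretable model."""
    return 1.0 - len(set(used_features)) / len(set(all_features))
```

A model using 1 of 4 features scores 0.75; one using all features scores 0.0, so maximizing this measure pushes toward simpler models.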
arXiv Detail & Related papers (2023-07-17T00:07:52Z) - Agent-based Collaborative Random Search for Hyper-parameter Tuning and
Global Function Optimization [0.0]
This paper proposes an agent-based collaborative technique for finding near-optimal values for an arbitrary set of hyper-parameters in a machine learning model.
The behavior of the presented model, specifically against the changes in its design parameters, is investigated in both machine learning and global function optimization applications.
arXiv Detail & Related papers (2023-03-03T21:10:17Z) - Uncertainty-Aware Search Framework for Multi-Objective Bayesian
Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z) - Hyper-parameter optimization based on soft actor critic and hierarchical
mixture regularization [5.063728016437489]
We model the hyper-parameter optimization process as a Markov decision process, and tackle it with reinforcement learning.
A novel hyper-parameter optimization method based on soft actor critic and hierarchical mixture regularization has been proposed.
arXiv Detail & Related papers (2021-12-08T02:34:43Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that embedding the optimization problem as a layer in the model training pipeline yields predictions tailored to the downstream decision-making task.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.