Multi-Objective Population Based Training
- URL: http://arxiv.org/abs/2306.01436v1
- Date: Fri, 2 Jun 2023 10:54:24 GMT
- Title: Multi-Objective Population Based Training
- Authors: Arkadiy Dushatskiy, Alexander Chebykin, Tanja Alderliesten, Peter A.N. Bosman
- Abstract summary: Population Based Training (PBT) is an efficient hyperparameter optimization algorithm.
In this work, we introduce a multi-objective version of PBT, MO-PBT.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Population Based Training (PBT) is an efficient hyperparameter optimization
algorithm. PBT is a single-objective algorithm, but many real-world
hyperparameter optimization problems involve two or more conflicting
objectives. In this work, we therefore introduce a multi-objective version of
PBT, MO-PBT. Our experiments on diverse multi-objective hyperparameter
optimization problems (Precision/Recall, Accuracy/Fairness,
Accuracy/Adversarial Robustness) show that MO-PBT outperforms random search,
single-objective PBT, and the state-of-the-art multi-objective hyperparameter
optimization algorithm MO-ASHA.
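A minimal sketch of the core idea: in a multi-objective PBT step, Pareto non-domination replaces PBT's scalar ranking in the exploit phase, after which copied hyperparameters are perturbed. The population layout, function names, and perturbation scheme below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return np.all(a >= b) and np.any(a > b)

def mo_pbt_step(population, scores, rng, perturb=0.2):
    """One exploit/explore step of a multi-objective PBT variant.

    population: list of dicts of numeric hyperparameters (one per worker)
    scores:     (n, 2) array of objective values (higher is better)
    """
    n = len(population)
    # Exploit: dominated workers copy the hyperparameters of a
    # randomly chosen non-dominated worker.
    front = [i for i in range(n)
             if not any(dominates(scores[j], scores[i]) for j in range(n))]
    for i in range(n):
        if i not in front:
            donor = population[rng.choice(front)]
            population[i] = dict(donor)
            # Explore: perturb the copied hyperparameters.
            for k in population[i]:
                population[i][k] *= rng.uniform(1 - perturb, 1 + perturb)
    return population
```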
Related papers
- MBL-CPDP: A Multi-objective Bilevel Method for Cross-Project Defect Prediction via Automated Machine Learning [34.89241736003651]
Cross-project defect prediction (CPDP) leverages machine learning (ML) techniques to proactively identify software defects, especially where project-specific data is scarce.
This paper formulates CPDP as a multi-objective bilevel optimization (MBLO) problem, addressed by a method dubbed MBL-CPDP.
It comprises two nested problems: an upper-level multi-objective optimization problem and a lower-level expensive optimization problem.
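The nesting can be pictured as follows. This is a hedged sketch of a generic bilevel loop, with `solve_inner` and `dominates` as placeholder callables rather than the MBL-CPDP implementation:

```python
def bilevel_search(candidates, solve_inner, dominates):
    """Sketch of a bilevel loop: the outer (multi-objective) level
    proposes configurations; the inner (expensive) level evaluates
    each one; the non-dominated results form the returned front.
    `solve_inner` stands in for the costly lower-level optimization."""
    evaluated = [(c, solve_inner(c)) for c in candidates]
    front = [(c, s) for c, s in evaluated
             if not any(dominates(t, s) for _, t in evaluated)]
    return front
```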
arXiv Detail & Related papers (2024-11-10T15:17:15Z)
- On the consistency of hyper-parameter selection in value-based deep reinforcement learning [13.133865673667394]
This paper conducts an empirical study focusing on the reliability of hyper-parameter selection for value-based deep reinforcement learning agents.
Our findings help establish which hyper-parameters are most critical to tune, and help clarify which tunings remain consistent across different training regimes.
arXiv Detail & Related papers (2024-06-25T13:06:09Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
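A hedged sketch of the general mechanism: treat each candidate weight vector as a bandit arm and select the one with the highest upper confidence bound on its scalarised return. The estimator below is the textbook UCB1 form, an assumption rather than the paper's exact rule.

```python
import numpy as np

def ucb_pick_weight(weights, means, counts, t, c=1.0):
    """Choose the candidate weight vector with the highest UCB score.

    weights: list of weight vectors (the arms)
    means:   empirical mean scalarised return per arm
    counts:  number of times each arm has been tried
    t:       total number of pulls so far
    """
    counts = np.maximum(counts, 1e-9)          # avoid division by zero
    ucb = means + c * np.sqrt(np.log(max(t, 2)) / counts)
    return weights[int(np.argmax(ucb))]
```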
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Speeding Up Multi-Objective Hyperparameter Optimization by Task Similarity-Based Meta-Learning for the Tree-Structured Parzen Estimator [37.553558410770314]
In this paper, we extend TPE's acquisition function to the meta-learning setting using a task similarity defined by the overlap of top domains between tasks.
In the experiments, we demonstrate that our method speeds up MO-TPE on tabular HPO benchmarks and attains state-of-the-art performance.
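One way such an overlap-based similarity could be computed, shown as a toy Jaccard-style measure over observed configurations; the paper's definition over continuous top domains differs in detail.

```python
def top_domain_overlap(obs_a, obs_b, gamma=0.25):
    """Illustrative task similarity: overlap between the sets of
    configurations ranking in the top-gamma fraction for each task.

    obs_*: dict mapping a hashable configuration to its observed loss.
    """
    def top_set(obs):
        k = max(1, int(gamma * len(obs)))
        return set(sorted(obs, key=obs.get)[:k])   # lowest-loss configs
    a, b = top_set(obs_a), top_set(obs_b)
    return len(a & b) / len(a | b)                 # overlap in [0, 1]
```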
arXiv Detail & Related papers (2022-12-13T17:33:02Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
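Not the paper's metamodel, but the following shows the standard way to encode per-evaluation (heterogeneous) noise in a Gaussian Process Regression, here via scikit-learn's per-sample `alpha`; the toy data and kernel choice are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy data: y observed with input-dependent (heterogeneous) noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 1))
noise_var = 0.01 + 0.1 * X[:, 0]               # noise variance grows with x
y = np.sin(4 * X[:, 0]) + rng.normal(0, np.sqrt(noise_var))

# Per-point noise variances go in `alpha`, giving a heteroscedastic GP.
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise_var)
gpr.fit(X, y)
mean, std = gpr.predict(np.array([[0.5]]), return_std=True)
```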
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
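A toy illustration of the idea, assuming per-example gradients are available: greedily pick the subset whose mean gradient best matches the full-data mean gradient. This is a simplified stand-in, not the AUTOMATA algorithm.

```python
import numpy as np

def greedy_gradient_subset(grads, k):
    """Greedily pick k per-example gradients whose running mean best
    approximates the full-data mean gradient.
    grads: (n, d) array of per-example gradients."""
    target = grads.mean(axis=0)
    chosen, current = [], np.zeros(grads.shape[1])
    for _ in range(k):
        # Error of the running mean if each remaining point were added.
        errs = [np.inf if i in chosen else
                np.linalg.norm((current + g) / (len(chosen) + 1) - target)
                for i, g in enumerate(grads)]
        i = int(np.argmin(errs))
        chosen.append(i)
        current += grads[i]
    return chosen
```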
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning [27.36718899899319]
Model-based Reinforcement Learning (MBRL) is a promising framework for learning control in a data-efficient manner.
MBRL typically requires significant human expertise before it can be applied to new problems and domains.
arXiv Detail & Related papers (2021-02-26T18:57:47Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem, where A is a black-box optimization algorithm.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
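A hedged sketch of what averaged zeroth-order hyper-gradients can look like: average several two-point random-direction estimates of the derivative of the black-box validation loss, then take a gradient step on the hyperparameters. The constants and estimator form below are assumptions, not the paper's exact method.

```python
import numpy as np

def zeroth_order_hypergrad(f, lam, n_samples=8, mu=1e-2, rng=None):
    """Averaged two-point zeroth-order estimate of d f / d lam.

    f:   black-box validation loss as a function of hyperparameters
         (each call implicitly runs inner training, hence expensive)
    lam: current hyperparameter vector (float array)
    """
    rng = rng or np.random.default_rng()
    d, g = lam.size, np.zeros_like(lam)
    for _ in range(n_samples):
        u = rng.standard_normal(d)                    # random direction
        g += (f(lam + mu * u) - f(lam - mu * u)) / (2 * mu) * u
    return g / n_samples

# Hyperparameter update step, e.g.:
# lam -= lr * zeroth_order_hypergrad(f, lam)
```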
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.