FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization
- URL: http://arxiv.org/abs/2206.03966v2
- Date: Fri, 10 Jun 2022 06:04:48 GMT
- Title: FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization
- Authors: Zhen Wang, Weirui Kuang, Ce Zhang, Bolin Ding, Yaliang Li
- Abstract summary: We propose and implement a benchmark suite FedHPO-B that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions.
We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
- Score: 50.12374973760274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter optimization (HPO) is crucial for machine learning
algorithms to achieve satisfactory performance, and its progress has been
boosted by related benchmarks. Nonetheless, existing benchmarking efforts all
focus on HPO for traditional centralized learning while ignoring federated
learning (FL), a promising paradigm for collaboratively learning models from
dispersed data. In this paper, we first identify several unique
characteristics of HPO for FL algorithms. Because of these characteristics,
existing HPO benchmarks no longer satisfy the need to compare HPO methods in
the FL setting. To facilitate HPO research in the FL setting, we propose and
implement the benchmark suite FedHPO-B, which incorporates comprehensive FL
tasks, enables efficient function evaluations, and eases continuing
extensions. We also conduct extensive experiments based on FedHPO-B to
benchmark a few HPO methods. We open-source FedHPO-B at
https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB.
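To make "efficient function evaluations" concrete: a tabular benchmark replaces an expensive FL training run with a lookup into pre-computed results. The sketch below is illustrative only; the class name and table columns are hypothetical assumptions, not FedHPO-B's actual API.

```python
import csv

class TabularFLBenchmark:
    """Serves pre-computed FL results so HPO methods never retrain models.
    Hypothetical sketch: the columns (lr, weight_decay, round, val_acc)
    are illustrative assumptions, not FedHPO-B's real schema."""

    def __init__(self, path):
        with open(path) as f:
            self.table = {
                (float(r["lr"]), float(r["weight_decay"]), int(r["round"])):
                    float(r["val_acc"])
                for r in csv.DictReader(f)
            }

    def __call__(self, config, fidelity):
        # An HPO method queries (hyperparameters, number of FL rounds) and
        # gets the logged validation accuracy back in O(1) time.
        key = (config["lr"], config["weight_decay"], fidelity["round"])
        return self.table[key]

# bench = TabularFLBenchmark("records.csv")
# acc = bench({"lr": 0.1, "weight_decay": 1e-4}, {"round": 50})
```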
Related papers
- Hierarchical Preference Optimization: Learning to achieve goals via feasible subgoals prediction [71.81851971324187]
This work introduces Hierarchical Preference Optimization (HPO), a novel approach to hierarchical reinforcement learning (HRL); note that here HPO abbreviates the preference method, not hyperparameter optimization.
HPO addresses non-stationarity and infeasible subgoal generation issues when solving complex robotic control tasks.
Experiments on challenging robotic navigation and manipulation tasks demonstrate impressive performance of HPO, where it shows an improvement of up to 35% over the baselines.
arXiv Detail & Related papers (2024-11-01T04:58:40Z)
- From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function [50.812404038684505]
We show that we can derive DPO in the token-level MDP as a general inverse Q-learning algorithm, which satisfies the Bellman equation.
We discuss applications of our work, including information elicitation in multi-turn dialogue, reasoning, agentic applications and end-to-end training of multi-model systems.
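For reference, the sequence-level DPO objective that this work reinterprets at the token level is the standard one from the original DPO paper (with $\sigma$ the logistic function, $\beta$ the KL penalty strength, and $(x, y_w, y_l)$ a prompt with preferred and dispreferred responses):

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Roughly, the paper's token-level view lets the log-likelihood ratio $\beta \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)}$ play the role of an implicit per-token reward, which underpins the inverse Q-learning interpretation.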
arXiv Detail & Related papers (2024-04-18T17:37:02Z)
- PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning [49.92394599459274]
We propose PriorBand, an HPO algorithm tailored to Deep Learning (DL) pipelines.
We demonstrate its robustness across a range of DL benchmarks, its gains under informative expert input, and its resilience to poor expert beliefs.
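One ingredient behind "gains under informative expert input" can be pictured as prior-weighted sampling: mix draws near an expert's suggested value with broad log-uniform draws, so a good prior speeds up the search while a bad one cannot trap it. A minimal sketch under assumed names and mixing rule, not PriorBand's exact procedure:

```python
import math
import random

def sample_learning_rate(prior_value=1e-3, prior_weight=0.5,
                         low=1e-5, high=1e-1):
    """With probability prior_weight, sample near the expert's suggested
    value; otherwise fall back to a log-uniform draw over the full range."""
    if random.random() < prior_weight:
        # Multiplicative log-normal perturbation around the expert's guess.
        lr = prior_value * math.exp(random.gauss(0.0, 0.5))
        return min(max(lr, low), high)  # clip back into the search range
    # Log-uniform sampling guards against a poor expert belief.
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Propose candidates for the cheapest rung of a HyperBand-style schedule:
candidates = [sample_learning_rate() for _ in range(20)]
```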
arXiv Detail & Related papers (2023-06-21T16:26:14Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, and that its performance can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Single-shot Hyper-parameter Optimization for Federated Learning: A General Algorithm & Analysis [20.98323380319439]
We introduce Federated Loss suRface Aggregation (FLoRA), a general FL-HPO solution framework.
FLoRA enables single-shot FL-HPO: identifying a single set of good hyper-parameters that are subsequently used in a single FL training run.
Our empirical evaluation of FLoRA for multiple ML algorithms on seven OpenML datasets demonstrates significant model accuracy improvements over the considered baseline.
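A minimal sketch of the single-shot idea, assuming each client reports per-configuration local losses and the server aggregates them into one surface; the unweighted mean below is an illustrative choice, not necessarily FLoRA's exact aggregation rule:

```python
def aggregate_loss_surfaces(client_evals):
    """client_evals: one dict per client, mapping a hyperparameter
    tuple to the loss that client measured locally."""
    configs = set().union(*(c.keys() for c in client_evals))
    surface = {}
    for cfg in configs:
        losses = [c[cfg] for c in client_evals if cfg in c]
        surface[cfg] = sum(losses) / len(losses)  # simple unweighted mean
    # Single-shot choice: pick one config, then run FL training once.
    return min(surface, key=surface.get)

clients = [
    {(0.1,): 0.9, (0.01,): 0.5},   # client A's local losses per config
    {(0.1,): 0.8, (0.01,): 0.6},   # client B's local losses per config
]
best_lr, = aggregate_loss_surfaces(clients)  # -> 0.01
```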
arXiv Detail & Related papers (2022-02-16T21:14:34Z)
- FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning [19.854596038293277]
We introduce Federated Loss suRface Aggregation (FLoRA), the first FL-HPO solution framework.
The framework enables single-shot FL-HPO solutions with minimal additional communication overhead.
Our empirical evaluation of FLoRA for Gradient Boosted Decision Trees on seven OpenML data sets demonstrates significant model accuracy improvements.
arXiv Detail & Related papers (2021-12-15T23:18:32Z)
- HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO [30.89560505052524]
We propose HPOBench, which includes 7 existing and 5 new benchmark families, with in total more than 100 multi-fidelity benchmark problems.
HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers.
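The multi-fidelity query pattern that such benchmarks expose looks roughly like the following; the class, method, and toy objective are hypothetical stand-ins, not HPOBench's actual (containerized) API:

```python
class MultiFidelityBenchmark:
    """Hypothetical stand-in for a multi-fidelity HPO benchmark."""

    def objective(self, config, fidelity):
        """Return the validation loss of config trained at the given budget."""
        # A real benchmark would train (or look up) a model here; this toy
        # surface just improves with budget and depends on the learning rate.
        return (config["lr"] - 0.01) ** 2 + 1.0 / fidelity["epochs"]

bench = MultiFidelityBenchmark()
cheap = bench.objective({"lr": 0.05}, {"epochs": 3})    # low-fidelity probe
full = bench.objective({"lr": 0.05}, {"epochs": 100})   # full-budget check
```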
arXiv Detail & Related papers (2021-09-14T14:28:51Z)
- DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture [81.82173855071312]
We propose an end-to-end solution that integrates the AutoML components and returns a ready-to-use model at the end of the search.
DHA achieves state-of-the-art (SOTA) results on various datasets, notably 77.4% accuracy on ImageNet with a cell-based search space.
arXiv Detail & Related papers (2021-09-13T08:12:50Z)
- YAHPO Gym -- Design Criteria and a new Multifidelity Benchmark for Hyperparameter Optimization [1.0718353079920009]
We present a new surrogate-based benchmark suite for multifidelity HPO methods consisting of 9 benchmark collections that constitute over 700 multifidelity HPO problems in total.
All our benchmarks also allow for querying of multiple optimization targets, enabling the benchmarking of multi-objective HPO.
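Multi-objective querying means one configuration returns several targets at once, and HPO methods are then compared by the Pareto front they recover. A toy illustration with hypothetical target names:

```python
def pareto_front(points):
    """Keep points not dominated on (val_error, train_time), both minimized."""
    return [p for p in points
            if not any(q["val_error"] <= p["val_error"]
                       and q["train_time"] <= p["train_time"]
                       and q != p
                       for q in points)]

results = [
    {"val_error": 0.10, "train_time": 120.0},  # accurate but slow
    {"val_error": 0.12, "train_time": 40.0},   # fast, slightly worse
    {"val_error": 0.15, "train_time": 90.0},   # dominated by the second
]
print(pareto_front(results))  # keeps the first two points
```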
arXiv Detail & Related papers (2021-09-08T14:16:31Z)
- HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML [5.735035463793008]
We present HPO-B, a large-scale benchmark for comparing HPO algorithms.
Our benchmark is assembled and preprocessed from the OpenML repository.
We detail explicit experimental protocols, splits, and evaluation measures for comparing methods for both non-transfer and transfer learning HPO.
arXiv Detail & Related papers (2021-06-11T09:18:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.