Evaluation of Hyperparameter-Optimization Approaches in an Industrial
Federated Learning System
- URL: http://arxiv.org/abs/2110.08202v1
- Date: Fri, 15 Oct 2021 17:01:40 GMT
- Title: Evaluation of Hyperparameter-Optimization Approaches in an Industrial
Federated Learning System
- Authors: Stephanie Holly, Thomas Hiessl, Safoura Rezapour Lakani, Daniel
Schall, Clemens Heitzinger, Jana Kemnitz
- Abstract summary: Federated Learning (FL) decouples model training from the need for direct access to the data.
In this work, we investigated the impact of different hyperparameter optimization approaches in an FL system.
We implemented these approaches based on grid search and Bayesian optimization and evaluated the algorithms on the MNIST data set and on an Internet of Things (IoT) sensor-based industrial data set.
- Score: 0.2609784101826761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) decouples model training from the need for direct
access to the data and allows organizations to collaborate with industry
partners to reach a satisfying level of performance without sharing vulnerable
business information. The performance of a machine learning algorithm is highly
sensitive to the choice of its hyperparameters. In an FL setting,
hyperparameter optimization poses new challenges. In this work, we investigated
the impact of different hyperparameter optimization approaches in an FL system.
In an effort to reduce communication costs, a critical bottleneck in FL, we
investigated a local hyperparameter optimization approach that -- in contrast
to a global hyperparameter optimization approach -- allows every client to have
its own hyperparameter configuration. We implemented these approaches based on
grid search and Bayesian optimization and evaluated the algorithms on the MNIST
data set using an i.i.d. partition and on an Internet of Things (IoT) sensor-based industrial data set using a non-i.i.d. partition.
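As a rough illustration of the two settings the abstract contrasts, the sketch below assumes a hypothetical train_and_validate(client, config) helper that trains on a client's data and returns a validation score; the grid, names, and structure are illustrative, not the authors' implementation.

```python
# Illustrative sketch: global vs. local hyperparameter optimization in FL.
# `train_and_validate(client, config)` is a hypothetical stand-in that
# trains a model on the client's data and returns a validation score.
from itertools import product

GRID = {"lr": [0.1, 0.01, 0.001], "batch_size": [16, 32]}

def configs(grid):
    """Enumerate all configurations in the grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def global_hpo(clients, train_and_validate):
    # One shared configuration, chosen by the best average validation
    # score across clients; evaluating every candidate on every client
    # is what drives up communication cost.
    return max(
        configs(GRID),
        key=lambda cfg: sum(train_and_validate(c, cfg) for c in clients)
        / len(clients),
    )

def local_hpo(clients, train_and_validate):
    # Per-client configurations: each client searches independently on
    # its own data, so the search itself needs no extra communication.
    return {
        c: max(configs(GRID), key=lambda cfg: train_and_validate(c, cfg))
        for c in clients
    }
```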
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
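The summary above does not spell out SpaFL's mechanism; purely to illustrate why sparse model structures shrink FL payloads, here is a generic magnitude-pruning sketch (an assumption for illustration, not SpaFL itself):

```python
import numpy as np

def sparsify(params: np.ndarray, keep_ratio: float = 0.1):
    """Generic magnitude pruning: keep only the largest-magnitude
    entries, so a client uploads indices and values instead of the
    full dense parameter vector."""
    k = max(1, int(keep_ratio * params.size))
    idx = np.argsort(np.abs(params))[-k:]  # positions of the top-k entries
    return idx, params[idx]                # ~keep_ratio of the original payload

def densify(idx: np.ndarray, values: np.ndarray, size: int) -> np.ndarray:
    """Server-side reconstruction of the sparse update."""
    out = np.zeros(size)
    out[idx] = values
    return out
```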
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
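For reference, the Ordered Weighted Averaging objective mentioned above is, for a fixed weight vector $w$:

```latex
\mathrm{OWA}_{w}(y) \;=\; \sum_{i=1}^{n} w_i \, y_{\sigma(i)},
\qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1,
```

where $\sigma$ sorts the entries of $y$ in non-increasing order; the sorting step is what makes the objective nondifferentiable.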
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
Training Large Language Models (LLMs) generally involves updating a significant number of parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
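The entry above leaves "partial prompt tuning" unspecified; one plausible reading, sketched below with PyTorch, trains (and therefore communicates) only a slice of the soft-prompt vectors while the rest stay frozen. The class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class PartialSoftPrompt(nn.Module):
    """Hypothetical sketch: only the first `n_tunable` prompt vectors
    receive gradients, so an FL client only uploads that small slice."""
    def __init__(self, n_tokens: int = 20, n_tunable: int = 5, dim: int = 768):
        super().__init__()
        self.tunable = nn.Parameter(0.02 * torch.randn(n_tunable, dim))
        # The remaining prompt vectors stay fixed and are never communicated.
        self.register_buffer("frozen", 0.02 * torch.randn(n_tokens - n_tunable, dim))

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the (partially trainable) prompt to the input embeddings.
        prompt = torch.cat([self.tunable, self.frozen], dim=0)
        prompt = prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)
```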
- Sample-Driven Federated Learning for Energy-Efficient and Real-Time IoT Sensing [22.968661040226756]
We introduce an online reinforcement learning algorithm named Sample-driven Control for Federated Learning (SCFL), built on the Soft Actor-Critic (SAC) framework.
SCFL enables the agent to dynamically adapt and find the global optima even in changing environments.
arXiv Detail & Related papers (2023-10-11T13:50:28Z)
- Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- FedAVO: Improving Communication Efficiency in Federated Learning with African Vultures Optimizer [0.0]
Federated Learning (FL) is a distributed machine learning technique.
In this paper, we introduce FedAVO, a novel FL algorithm that enhances communication efficiency.
We show that FedAVO achieves significant improvements in terms of model accuracy and communication rounds.
arXiv Detail & Related papers (2023-05-02T02:04:19Z)
- A Framework for History-Aware Hyperparameter Optimisation in Reinforcement Learning [8.659973888018781]
A Reinforcement Learning (RL) system depends on a set of initial conditions that affect the system's performance.
We propose a framework based on integrating complex event processing and temporal models to alleviate the trade-offs involved in tuning these conditions.
We tested the proposed approach in a 5G mobile communications case study that uses DQN, a deep RL algorithm, for its decision-making.
arXiv Detail & Related papers (2023-03-09T11:30:40Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
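AUTOMATA's actual algorithm is more involved than this summary shows; as a toy sketch of the underlying idea, pick a small subset whose summed gradient approximates the full-data gradient, shown here with per-example last-layer gradients of a linear softmax model (all names illustrative):

```python
import numpy as np

def last_layer_grads(X, Y_onehot, W):
    """Per-example last-layer gradients of softmax cross-entropy,
    flattened: g_i = (p_i - y_i) outer x_i."""
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    err = P - Y_onehot                            # (n, classes)
    return np.einsum("nc,nd->ncd", err, X).reshape(len(X), -1)

def greedy_subset(G, k):
    """Greedily pick k examples whose rescaled gradient sum tracks the
    full-data gradient sum (an OMP-style heuristic)."""
    target = G.sum(axis=0)
    chosen, residual = [], target.copy()
    for _ in range(k):
        scores = G @ residual        # alignment with the remaining residual
        scores[chosen] = -np.inf     # do not pick an example twice
        chosen.append(int(np.argmax(scores)))
        residual = target - G[chosen].sum(axis=0) * (len(G) / len(chosen))
    return chosen
```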
- Automatic tuning of hyper-parameters of reinforcement learning algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent depends on the setting of many hyper-parameters.
In this work, a novel approach for autonomous hyper-parameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to other manual tweaking and optimization-based approaches.
arXiv Detail & Related papers (2021-12-15T13:10:44Z)
- JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps limitations by querying additional data.
We show that it achieves no-regret under conditions analogous to GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z)
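Both of the two preceding entries build on Gaussian-process Bayesian optimization. The classic GP-UCB rule that JUMBO's no-regret statement is benchmarked against queries $x_t = \arg\max_x \mu_{t-1}(x) + \sqrt{\beta_t}\,\sigma_{t-1}(x)$; below is a minimal loop in that spirit using scikit-learn's GP, with a stand-in objective where a real RL training run would go.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def gp_ucb(objective, candidates, n_init=3, n_iter=10, beta=2.0, seed=0):
    """Fit a GP to (hyperparameter, score) pairs and repeatedly query the
    candidate maximizing mean + beta * std (the UCB acquisition)."""
    rng = np.random.default_rng(seed)
    X = list(candidates[rng.choice(len(candidates), n_init, replace=False)])
    y = [objective(x) for x in X]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(np.array(X), np.array(y))
        mu, std = gp.predict(candidates, return_std=True)
        x_next = candidates[int(np.argmax(mu + beta * std))]
        X.append(x_next)
        y.append(objective(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]

# Stand-in objective over log10(learning rate); a real use would train an
# agent with lr = 10**x[0] and return its average reward.
candidates = np.linspace(-5, -1, 50).reshape(-1, 1)
best_x, best_y = gp_ucb(lambda x: -(x[0] + 3.0) ** 2, candidates)
```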
- Particle Swarm Optimized Federated Learning For Industrial IoT and Smart City Services [9.693848515371268]
We propose a Particle Swarm Optimization (PSO)-based technique to optimize the hyper-parameter settings for the local Machine Learning models.
We evaluate the performance of our proposed technique using two case studies.
arXiv Detail & Related papers (2020-09-05T16:20:47Z)
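For reference, the standard PSO update iterated by such hyper-parameter searches, with inertia $\omega$, weights $c_1, c_2$, and uniform random draws $r_1, r_2$:

```latex
v_i \leftarrow \omega\, v_i + c_1 r_1 \,(p_i - x_i) + c_2 r_2 \,(g - x_i),
\qquad x_i \leftarrow x_i + v_i,
```

where $x_i$ is particle $i$'s hyper-parameter vector, $p_i$ its personal best, and $g$ the swarm's best configuration so far.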
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.