Particle Swarm Optimized Federated Learning For Industrial IoT and Smart
City Services
- URL: http://arxiv.org/abs/2009.02560v1
- Date: Sat, 5 Sep 2020 16:20:47 GMT
- Title: Particle Swarm Optimized Federated Learning For Industrial IoT and Smart
City Services
- Authors: Basheer Qolomany, Kashif Ahmad, Ala Al-Fuqaha, Junaid Qadir
- Abstract summary: We propose a Particle Swarm Optimization (PSO)-based technique to optimize the hyperparameter settings for the local Machine Learning models.
We evaluate the performance of our proposed technique using two case studies.
- Score: 9.693848515371268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the research on Federated Learning (FL) has focused on analyzing
global optimization, privacy, and communication, with limited attention paid
to the critical matter of performing efficient local
training and inference at the edge devices. One of the main challenges for
successful and efficient training and inference on edge devices is the careful
selection of parameters to build local Machine Learning (ML) models. To this
aim, we propose a Particle Swarm Optimization (PSO)-based technique to optimize
the hyperparameter settings for the local ML models in an FL environment. We
evaluate the performance of our proposed technique using two case studies.
First, we consider smart city services and use an experimental transportation
dataset for traffic prediction as a proxy for this setting. Second, we consider
Industrial IoT (IIoT) services and use a real-time telemetry dataset to
predict the probability that a machine will fail shortly due to component
failures. Our experiments indicate that PSO provides an efficient approach for
tuning the hyperparameters of deep Long Short-Term Memory (LSTM) models when
compared to the grid search method. Our experiments illustrate that the number
of client-server communication rounds needed to explore the landscape of
configurations and find the near-optimal parameters is greatly reduced (roughly
by two orders of magnitude, needing only 2%--4% of the rounds compared to
state-of-the-art non-PSO-based approaches). We also demonstrate that utilizing the
proposed PSO-based technique to find the near-optimal configurations for FL and
centralized learning models does not adversely affect the accuracy of the
models.
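The PSO-based hyperparameter search described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hyperparameter names, bounds, PSO coefficients, and the surrogate objective are all hypothetical stand-ins (in the paper, the objective would be the validation performance of a local LSTM model evaluated across FL communication rounds).

```python
import random

# Hypothetical hyperparameter ranges for a local LSTM model
# (names and bounds are illustrative, not taken from the paper).
BOUNDS = {
    "hidden_units": (16, 256),
    "learning_rate": (1e-4, 1e-1),
    "dropout": (0.0, 0.5),
}

def surrogate_loss(params):
    # Stand-in for the federated validation loss that would be measured
    # after local training rounds; a smooth toy function with a known
    # minimum so the sketch is self-contained and cheap to run.
    return (((params["hidden_units"] - 128) / 128) ** 2
            + (params["learning_rate"] - 0.01) ** 2
            + (params["dropout"] - 0.2) ** 2)

def pso(objective, bounds, n_particles=10, n_iters=30,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO over a box-constrained search space."""
    rng = random.Random(seed)
    keys = list(bounds)
    # Initialize particle positions uniformly at random, velocities at zero.
    pos = [{k: rng.uniform(*bounds[k]) for k in keys}
           for _ in range(n_particles)]
    vel = [{k: 0.0 for k in keys} for _ in range(n_particles)]
    pbest = [dict(p) for p in pos]                 # per-particle best position
    pbest_val = [objective(p) for p in pos]        # per-particle best loss
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = dict(pbest[g]), pbest_val[g]  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for k in keys:
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                lo, hi = bounds[k]
                # Move the particle, clamped to the search-space bounds.
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = dict(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = dict(pos[i]), val
    return gbest, gbest_val

best, best_val = pso(surrogate_loss, BOUNDS)
```

In the FL setting the paper describes, each call to `objective` would cost one or more client-server communication rounds, which is why reducing the number of evaluated configurations translates directly into the 2%--4% round counts reported above.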
Related papers
- Hyper-parameter Optimization for Federated Learning with Step-wise Adaptive Mechanism [0.48342038441006796]
Federated Learning (FL) is a decentralized learning approach that protects sensitive information by utilizing local model parameters rather than sharing clients' raw datasets.
This paper investigates the deployment and integration of two lightweight Hyperparameter Optimization (HPO) tools, Raytune and Optuna, within the context of FL settings.
To this end, both local and global feedback mechanisms are integrated to limit the search space and expedite the HPO process.
arXiv Detail & Related papers (2024-11-19T05:49:00Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning [9.900317349372383]
Federated Learning (FL) provides a privacy-preserving framework for training machine learning models on mobile edge devices.
Traditional FL algorithms, e.g., FedAvg, impose a heavy communication workload on these devices.
We propose a two-tier HFEL system, where edge devices are connected to edge servers and edge servers are interconnected through peer-to-peer (P2P) edge backhauls.
Our goal is to enhance the training efficiency of the HFEL system through strategic resource allocation and topology design.
arXiv Detail & Related papers (2024-09-29T01:48:04Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL: a communication-efficient FL framework is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally incurs the update of significant parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- Sample-Driven Federated Learning for Energy-Efficient and Real-Time IoT Sensing [22.968661040226756]
We introduce an online reinforcement learning algorithm named Sample-driven Control for Federated Learning (SCFL) built on the Soft Actor-Critic (A2C) framework.
SCFL enables the agent to dynamically adapt and find the global optima even in changing environments.
arXiv Detail & Related papers (2023-10-11T13:50:28Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL is still not explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Evaluation of Hyperparameter-Optimization Approaches in an Industrial Federated Learning System [0.2609784101826761]
Federated Learning (FL) decouples model training from the need for direct access to the data.
In this work, we investigated the impact of different hyperparameter optimization approaches in an FL system.
We implemented these approaches based on grid search and Bayesian optimization and evaluated the algorithms on the MNIST data set and on the Internet of Things (IoT) sensor based industrial data set.
arXiv Detail & Related papers (2021-10-15T17:01:40Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL to find the best candidate clients and an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.