Value of Information and Timing-aware Scheduling for Federated Learning
- URL: http://arxiv.org/abs/2312.10512v1
- Date: Sat, 16 Dec 2023 17:51:22 GMT
- Title: Value of Information and Timing-aware Scheduling for Federated Learning
- Authors: Muhammad Azeem Khan, Howard H. Yang, Zihan Chen, Antonio Iera,
Nikolaos Pappas
- Abstract summary: Federated Learning (FL) offers a solution by preserving data privacy during training.
In FL, an access point (AP) brings the model directly to User Equipments (UEs) for local training.
- Score: 24.40692354824834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data possesses significant value as it fuels advancements in AI. However,
protecting the privacy of the data generated by end-user devices has become
crucial. Federated Learning (FL) offers a solution by preserving data privacy
during training. In FL, an access point (AP) brings the model directly to
User Equipments (UEs) for local training. The AP periodically aggregates the
trained parameters from the UEs, refines the model, and sends it back to them.
However,
due to communication constraints, only a subset of UEs can update parameters
during each global aggregation. Consequently, developing innovative scheduling
algorithms is vital to enable complete FL implementation and enhance FL
convergence. In this paper, we present a scheduling policy combining Age of
Update (AoU) concepts and data Shapley metrics. This policy considers the
freshness and value of received parameter updates from individual data sources
and real-time channel conditions to enhance FL's operational efficiency. The
proposed algorithm is simple, and its effectiveness is demonstrated through
simulations.
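As an illustration of the idea, the sketch below scores each UE by multiplying its Age of Update, its estimated data Shapley value, and its current channel gain, and schedules the top-k scorers. The multiplicative combination and the top-k rule are assumptions for this sketch, not necessarily the paper's exact policy.

```python
import numpy as np

def schedule_ues(aou, shapley, channel_gain, k):
    """Score UEs by update staleness (AoU), data value (Shapley), and channel
    quality, then schedule the k highest scorers. The multiplicative score
    is an illustrative assumption."""
    score = aou * shapley * channel_gain
    return np.argsort(score)[-k:][::-1]          # best-first UE indices

# toy round: 6 UEs, 2 uplink slots
rng = np.random.default_rng(0)
aou = np.array([3, 1, 5, 2, 4, 1])               # rounds since last update
shap = rng.uniform(0.1, 1.0, size=6)             # estimated data Shapley value
gain = rng.rayleigh(scale=1.0, size=6)           # current channel gains
print(schedule_ues(aou, shap, gain, k=2))
```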
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
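A minimal sketch of the client-side inner level of such a bi-level scheme, assuming a Gaussian prior centred at a shared prior mean: each client performs MAP estimation by minimizing its local negative log-likelihood plus a prior penalty. The outer level that updates the prior, and the weight `lam`, are simplifying assumptions.

```python
import torch
import torch.nn as nn

def map_personalize(model, prior_mean, loader, lam=0.1, lr=1e-2, steps=5):
    """Inner level of a bi-level MAP scheme (illustrative): the client
    maximizes posterior = local likelihood x Gaussian prior centred at the
    shared prior mean; `lam` acts as the prior precision."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    nll_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in loader:
            opt.zero_grad()
            nll = nll_fn(model(x), y)
            prior = sum(((p - m) ** 2).sum()
                        for p, m in zip(model.parameters(), prior_mean))
            (nll + lam * prior).backward()
            opt.step()
    return model

# toy client: 2-class problem, prior mean = a copy of the global parameters
model = nn.Linear(4, 2)
prior = [p.detach().clone() for p in model.parameters()]
loader = [(torch.randn(8, 4), torch.randint(0, 2, (8,)))]
map_personalize(model, prior, loader)
```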
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- StatAvg: Mitigating Data Heterogeneity in Federated Learning for Intrusion Detection Systems [22.259297167311964]
Federated learning (FL) is a decentralized learning technique that enables devices to collaboratively build a shared Machine Learning (ML) or Deep Learning (DL) model without revealing their raw data to a third party.
Due to its privacy-preserving nature, FL has sparked widespread attention for building Intrusion Detection Systems (IDS) within the realm of cybersecurity.
We propose an effective method called Statistical Averaging (StatAvg) to alleviate non-independent and identically distributed (non-iid) features across local clients' data in FL.
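A hedged sketch of the general statistical-averaging idea: clients report local feature means and variances (with sample counts), the server merges them into global statistics, and clients normalize their features with the shared result. The exact statistics StatAvg exchanges may differ.

```python
import numpy as np

def stat_avg(client_stats):
    """Server side of a StatAvg-style scheme (illustrative): combine
    per-client feature means/variances into global normalization
    statistics shared back to all clients."""
    n = sum(c["n"] for c in client_stats)
    mean = sum(c["n"] * c["mean"] for c in client_stats) / n
    # law of total variance: average within-client variance + between-client spread
    var = sum(c["n"] * (c["var"] + (c["mean"] - mean) ** 2)
              for c in client_stats) / n
    return mean, var

def local_stats(X):
    """Each client reports only summary statistics, not raw data."""
    return {"n": len(X), "mean": X.mean(axis=0), "var": X.var(axis=0)}

clients = [np.random.randn(100, 4) * s + s for s in (1.0, 3.0)]
g_mean, g_var = stat_avg([local_stats(X) for X in clients])
normalized = [(X - g_mean) / np.sqrt(g_var + 1e-8) for X in clients]
```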
arXiv Detail & Related papers (2024-05-20T14:41:59Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
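One way an aggregation-free round could look, sketched under the assumption that clients upload small condensed datasets which the server trains on directly instead of averaging client models; how FedAF actually condenses local knowledge is omitted here.

```python
import torch
import torch.nn as nn

def server_train_on_condensed(model, condensed_sets, epochs=1, lr=0.05):
    """Server side of an aggregation-free round (illustrative): rather than
    averaging client models, the server trains the global model directly on
    small condensed datasets uploaded by the clients."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in condensed_sets:          # one condensed batch per client
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# toy round: two clients each upload 10 condensed samples
model = nn.Linear(4, 3)
condensed = [(torch.randn(10, 4), torch.randint(0, 3, (10,))) for _ in range(2)]
server_train_on_condensed(model, condensed)
```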
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Version age-based client scheduling policy for federated learning [25.835001146856396]
Federated Learning (FL) has emerged as a privacy-preserving machine learning paradigm facilitating collaborative training across multiple clients without sharing local data.
Despite advancements in edge device capabilities, communication bottlenecks present challenges in aggregating a large number of clients.
This bottleneck creates the critical challenge of stragglers in FL and makes client scheduling policies decisive for global model convergence and stability.
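A minimal sketch of version-age-based scheduling, assuming a client's version age is the number of global rounds elapsed since the model version it last received; scheduling the most outdated clients first bounds staleness. The exact age definition and tie-breaking are assumptions.

```python
def version_age_schedule(version, current_round, k):
    """Schedule the k clients whose local model version is most outdated.
    version[c] is the global round at which client c last synchronized."""
    ages = {c: current_round - v for c, v in version.items()}
    return sorted(ages, key=ages.get, reverse=True)[:k]

version = {"A": 5, "B": 2, "C": 7, "D": 0}   # last version held per client
print(version_age_schedule(version, current_round=8, k=2))  # ['D', 'B']
```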
arXiv Detail & Related papers (2024-02-08T04:48:51Z)
- Federated Learning with Reduced Information Leakage and Computation [17.069452700698047]
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
This paper introduces Upcycled-FL, a strategy that applies first-order approximation at every even round of model update.
Under this strategy, half of the FL updates incur no information leakage and require far less computation and transmission.
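A toy sketch of the even/odd round structure, assuming odd rounds run genuine local training while even rounds extrapolate from the two most recent models via a first-order approximation and touch no client data; the exact extrapolation rule in Upcycled-FL may differ.

```python
def upcycled_update(t, w_curr, w_prev, local_train):
    """Odd rounds: genuine, data-dependent local training.
    Even rounds: data-free first-order extrapolation from the two most
    recent models (illustrative rule, not the paper's exact update)."""
    if t % 2 == 1:
        return local_train(w_curr)                        # touches local data
    return [c + (c - p) for c, p in zip(w_curr, w_prev)]  # data-free round

# toy usage: 'training' nudges each weight toward 1.0
local_train = lambda w: [x + 0.1 * (1.0 - x) for x in w]
w_prev, w_curr = [0.0, 0.0], [0.2, 0.2]
for t in range(1, 5):
    w_prev, w_curr = w_curr, upcycled_update(t, w_curr, w_prev, local_train)
print(w_curr)
```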
arXiv Detail & Related papers (2023-10-10T06:22:06Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
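A minimal sketch of such a split, assuming a shared feature extractor that is magnitude-pruned before upload and a personalized head that never leaves the device; the layer sizes, pruning rule, and `keep_ratio` parameter are illustrative.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Shared (prunable) feature extractor aggregated across devices,
    plus a personalized head kept and fine-tuned locally."""
    def __init__(self, in_dim=32, hidden=64, classes=10):
        super().__init__()
        self.global_part = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.personal_part = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.personal_part(self.global_part(x))

def upload_payload(model, keep_ratio=0.5):
    """Magnitude-prune the shared part before upload; the head stays local."""
    payload = {}
    for name, w in model.global_part.state_dict().items():
        k = max(1, int(keep_ratio * w.numel()))
        thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
        payload[name] = torch.where(w.abs() >= thresh, w, torch.zeros_like(w))
    return payload

print({k: v.shape for k, v in upload_payload(SplitModel()).items()})
```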
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
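A hedged sketch of age-aware weighting: staler updates receive exponentially smaller aggregation weights. The exponential decay and the `alpha` parameter are assumptions; the paper derives its own weighting design.

```python
import numpy as np

def age_aware_aggregate(updates, ages, alpha=0.5):
    """Weight asynchronous client updates by staleness: the larger an
    update's age, the less it contributes to the aggregate."""
    w = np.exp(-alpha * np.asarray(ages, dtype=float))
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

updates = [np.random.randn(3) for _ in range(3)]
print(age_aware_aggregate(updates, ages=[0, 2, 5]))  # fresh update dominates
```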
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
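FedReg's actual mechanism relies on generated pseudo-data, so the sketch below swaps in a simpler FedProx-style proximal term that pulls local training back toward the global model, a common proxy for alleviating forgetting.

```python
import torch
import torch.nn as nn

def regularized_local_step(model, global_params, batch, opt, mu=0.01):
    """Forgetting-alleviated local step (proxy): local loss plus a proximal
    penalty keeping the local model close to the global one. FedReg itself
    uses generated pseudo-data rather than this penalty."""
    x, y = batch
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    prox = sum(((p - g) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    opt.step()

model = nn.Linear(4, 2)
global_params = [p.detach().clone() for p in model.parameters()]
opt = torch.optim.SGD(model.parameters(), lr=0.1)
regularized_local_step(model, global_params,
                       (torch.randn(8, 4), torch.randint(0, 2, (8,))), opt)
```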
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
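A minimal sketch of the gradient-inversion idea these works build on: optimize a dummy input until the gradients it induces match the gradients the client shared. The known label and the plain L2 gradient-matching loss are simplifying assumptions; practical attacks add image priors and label recovery.

```python
import torch
import torch.nn as nn

def invert_gradients(model, target_grads, x_shape, y, steps=200, lr=0.1):
    """Optimize a dummy input so that the gradients it induces on the model
    match the observed client gradients (label `y` assumed known)."""
    dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(dummy), y)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between induced and observed gradients
        sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads)).backward()
        opt.step()
    return dummy.detach()

# toy victim: recover one 4-feature sample from its shared gradient
model, x, y = nn.Linear(4, 2), torch.randn(1, 4), torch.tensor([1])
target = torch.autograd.grad(nn.functional.cross_entropy(model(x), y),
                             model.parameters())
print(invert_gradients(model, target, x.shape, y), x)
```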
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- User Scheduling for Federated Learning Through Over-the-Air Computation [22.853678584121862]
A machine learning technique termed federated learning (FL) aims to preserve data at the edge devices and to exchange only ML model parameters in the learning process.
FL not only reduces communication needs but also helps protect local privacy.
Over-the-air computation (AirComp) can compute while transmitting: it allows multiple devices to send their data simultaneously using analog modulation.
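A toy simulation of the AirComp principle: simultaneous analog transmissions superimpose on the channel, so the AP receives the sum of the local parameters and rescales it into an average; fading pre-compensation and power control are ignored here.

```python
import numpy as np

def aircomp_average(local_params, noise_std=0.01):
    """Devices transmit analog-modulated parameters at the same time; the
    wireless channel adds them up, and the AP rescales the noisy sum into
    an average. Fading and power control are ignored in this toy model."""
    received = np.sum(local_params, axis=0)                     # superposition
    received += np.random.normal(0.0, noise_std, received.shape)  # receiver noise
    return received / len(local_params)

models = [np.random.randn(5) for _ in range(4)]
print(aircomp_average(models))
print(np.mean(models, axis=0))   # should be close to the AirComp estimate
```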
arXiv Detail & Related papers (2021-08-05T23:58:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.