FedFNN: Faster Training Convergence Through Update Predictions in
Federated Recommender Systems
- URL: http://arxiv.org/abs/2309.08635v1
- Date: Thu, 14 Sep 2023 13:18:43 GMT
- Title: FedFNN: Faster Training Convergence Through Update Predictions in
Federated Recommender Systems
- Authors: Francesco Fabbri, Xianghang Liu, Jack R. McKenzie, Bartlomiej
Twardowski, and Tri Kurniawan Wijaya
- Abstract summary: Federated Learning (FL) has emerged as a key approach for distributed machine learning.
This paper introduces FedFNN, an algorithm that accelerates decentralized model training.
- Score: 4.4273123155989715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a key approach for distributed machine
learning, enhancing online personalization while ensuring user data privacy.
Instead of sending private data to a central server as in traditional
approaches, FL decentralizes computations: devices train locally and share
updates with a global server. A primary challenge in this setting is achieving
fast and accurate model training - vital for recommendation systems where
delays can compromise user engagement. This paper introduces FedFNN, an
algorithm that accelerates decentralized model training. In FL, only a subset
of users are involved in each training epoch. FedFNN employs supervised
learning to predict weight updates from unsampled users, using updates from the
sampled set. Our evaluations, using real and synthetic data, show: 1. FedFNN
achieves training speeds 5x faster than leading methods, maintaining or
improving accuracy; 2. the algorithm's performance is consistent regardless of
client cluster variations; 3. FedFNN outperforms other methods in scenarios
with limited client availability, converging more quickly.
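
The core mechanism in the abstract is concrete enough to sketch. Below is a minimal, illustrative simulation of the idea: in each round only a subset of clients trains locally; a supervised model (here a simple least-squares regressor over hypothetical client embeddings) is fit on the sampled clients' weight updates and then predicts updates for the unsampled clients, so the server can aggregate as if everyone participated. The embeddings, dimensions, and the linear predictor are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, emb_dim = 20, 8, 4

# Hypothetical per-client feature embeddings (the paper trains a predictor
# over clients; the exact inputs it uses are not specified here).
client_emb = rng.normal(size=(n_clients, emb_dim))

def local_update(w, c):
    # Stand-in for local training on client c: returns a weight delta.
    target = rng.normal(loc=c % 3, scale=0.1, size=w.shape)
    return 0.1 * (target - w)

w = np.zeros(dim)
for rnd in range(10):
    sampled = rng.choice(n_clients, size=5, replace=False)
    unsampled = np.setdiff1d(np.arange(n_clients), sampled)

    # Real updates from the sampled subset.
    U = np.stack([local_update(w, c) for c in sampled])

    # Supervised step: fit embedding -> update on the sampled pairs
    # (least squares), then predict updates for the unsampled clients.
    W, *_ = np.linalg.lstsq(client_emb[sampled], U, rcond=None)
    U_pred = client_emb[unsampled] @ W

    # Aggregate real and predicted updates as if all clients participated.
    w = w + np.concatenate([U, U_pred]).mean(axis=0)
```
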
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
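
The confederation topology is easy to sketch: each server aggregates its own pool of clients, then the servers reconcile among themselves. The toy below uses a plain average for the server-to-server step, whereas the paper uses an event-triggered SAGA-style scheme; all sizes and the toy "gradient" are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_servers, clients_per_server, dim = 3, 4, 5

# Each server owns a disjoint pool of clients (illustrative data).
client_data = rng.normal(size=(n_servers, clients_per_server, dim))

w = np.zeros(dim)
for rnd in range(20):
    # Stage 1: every server averages updates from its own clients.
    server_models = []
    for s in range(n_servers):
        grads = client_data[s] - w          # toy "gradient" per client
        server_models.append(w + 0.1 * grads.mean(axis=0))
    # Stage 2: the servers reconcile among themselves (plain average here;
    # the paper's event-triggered SAGA approach communicates more sparingly).
    w = np.mean(server_models, axis=0)
```
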
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
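
The asynchronous pattern itself can be shown in a few lines: the server mixes in each client's update the moment it arrives, even if the update was computed on a stale copy of the model. This is a generic async-FL sketch under assumed constants, not DeFedAvg's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim, alpha = 8, 4, 0.2  # alpha: server mixing rate (assumed)

targets = rng.normal(size=(n_clients, dim))     # toy per-client optima
w = np.zeros(dim)
stale = {c: w.copy() for c in range(n_clients)}  # model copy each client holds

for step in range(100):
    c = rng.integers(n_clients)            # some client finishes, in any order
    delta = 0.5 * (targets[c] - stale[c])  # update computed on a stale model
    w = w + alpha * delta                  # server applies it immediately
    stale[c] = w.copy()                    # client pulls the fresh model
```
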
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
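
For reference, a standard AMSGrad step is sketched below: because each client keeps its own optimizer state, the effective step size adapts per client, which is the ingredient FedLALR builds on. The hyperparameters and the quadratic toy objective are assumptions, not the paper's settings.

```python
import numpy as np

def amsgrad_step(w, g, state, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
    # One AMSGrad step; each client keeps its own (m, v, v_hat) state,
    # so the effective learning rate is tuned per client automatically.
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    v_hat = np.maximum(v_hat, v)
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

# Toy local run for one client (quadratic loss with optimum at `target`).
rng = np.random.default_rng(3)
target, w = rng.normal(size=4), np.zeros(4)
state = (np.zeros(4), np.zeros(4), np.zeros(4))
for _ in range(200):
    grad = w - target
    w, state = amsgrad_step(w, grad, state)
```
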
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Aergia: Leveraging Heterogeneity in Federated Learning Systems [5.0650178943079]
Federated Learning (FL) relies on clients to update a global model using their local datasets.
Aergia is a novel approach where slow clients freeze the part of their model that is the most computationally intensive to train.
Aergia significantly reduces the training time under heterogeneous settings by up to 27% and 53% compared to FedAvg and TiFL, respectively.
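
The freezing mechanic reduces to zeroing gradients on the expensive block so only the cheap part trains on slow clients. A minimal sketch, where the two-block split, costs, and toy gradient are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 6
w = rng.normal(size=(2, dim))  # row 0: "expensive" block, row 1: "cheap" block

def local_train(w, frozen_expensive, steps=10, lr=0.1):
    w = w.copy()
    target = rng.normal(size=w.shape)
    for _ in range(steps):
        g = w - target               # toy gradient for both blocks
        if frozen_expensive:
            g[0] = 0.0               # slow client skips the costly block
        w -= lr * g
    return w

fast_result = local_train(w, frozen_expensive=False)
slow_result = local_train(w, frozen_expensive=True)  # cheaper, partial update
```
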
arXiv Detail & Related papers (2022-10-12T12:59:18Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
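
FedReg's own regularization scheme is more involved than fits a short sketch. As a stand-in for the general idea of curbing forgetting during local training, the toy below adds a FedProx-style proximal penalty pulling local weights back toward the global model; this is explicitly not FedReg's method, and all constants are assumed.

```python
import numpy as np

def local_train_with_anchor(w_global, target, mu=0.5, lr=0.1, steps=20):
    # Local loss = task loss + (mu/2) * ||w - w_global||^2.
    # The proximal term discourages drifting from (and "forgetting")
    # the global model during local epochs.
    w = w_global.copy()
    for _ in range(steps):
        grad = (w - target) + mu * (w - w_global)
        w -= lr * grad
    return w

rng = np.random.default_rng(5)
w_global = np.zeros(4)
w_local = local_train_with_anchor(w_global, target=rng.normal(size=4))
```
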
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
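
The sparse sub-network idea can be sketched with a magnitude mask: only unmasked weights are trained (and would be communicated), and the mask is periodically re-extracted. The sparsity level and refresh schedule below are assumptions, and FedDST's prune-and-regrow dynamics are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
dim, keep = 20, 0.3  # train/communicate only the top 30% of weights (assumed)

w = rng.normal(size=dim)

def top_k_mask(w, keep):
    k = max(1, int(keep * w.size))
    thresh = np.sort(np.abs(w))[-k]
    return (np.abs(w) >= thresh).astype(float)

mask = top_k_mask(w, keep)
for step in range(50):
    g = w - 1.0                      # toy dense gradient
    w -= 0.1 * g * mask              # only the sparse sub-network trains
    if step % 10 == 9:               # periodically re-extract the mask
        mask = top_k_mask(w, keep)   # (FedDST also prunes/regrows; omitted)
```
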
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- FedPrune: Towards Inclusive Federated Learning [1.308951527147782]
Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner.
We propose FedPrune, a system that tackles this challenge by pruning the global model for slow clients based on their device characteristics.
By using insights from the Central Limit Theorem, FedPrune incorporates a new aggregation technique that achieves robust performance over non-IID data.
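
Device-aware pruning can be illustrated by having the server keep a magnitude-based fraction of weights sized to each client's capability; the capability numbers here are assumptions, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
w_global = rng.normal(size=100)

def prune_for_client(w, capability):
    # Keep only the largest-magnitude fraction of weights a device can afford;
    # the rest are zeroed (i.e., not trained or transmitted).
    k = max(1, int(capability * w.size))
    keep = np.argsort(np.abs(w))[-k:]
    sub = np.zeros_like(w)
    sub[keep] = w[keep]
    return sub

fast_model = prune_for_client(w_global, capability=1.0)   # full model
slow_model = prune_for_client(w_global, capability=0.25)  # quarter-size model
```
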
arXiv Detail & Related papers (2021-10-27T06:33:38Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
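
One way to picture adaptive selection is a pool that starts with the fastest clients and widens over rounds, so stragglers join only once the model needs their data for statistical accuracy. The schedule below is an illustrative assumption, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(8)
n_clients = 30
speed = rng.random(n_clients)            # toy per-client compute speed
by_speed = np.argsort(-speed)            # fastest first

for rnd in range(10):
    # Widen the eligible pool over time: a few fast clients early,
    # (nearly) all clients late in training.
    pool_size = min(n_clients, 5 + 3 * rnd)
    pool = by_speed[:pool_size]
    selected = rng.choice(pool, size=min(5, pool_size), replace=False)
    # ... run one FL round with `selected` ...
```
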
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
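
The over-the-air setting and a simple power-constrained precoding can be sketched as follows: clients scale their updates to meet a power budget, the channel sums the transmissions and adds noise, and the server inverts the scaling to recover a noisy average. The constants are assumptions, and COTAF's actual time-varying precoder differs in detail.

```python
import numpy as np

rng = np.random.default_rng(9)
n_clients, dim, power, noise_std = 10, 6, 1.0, 0.05

updates = rng.normal(size=(n_clients, dim))

# Precoding: scale so each transmitted vector meets the power budget.
max_norm = np.max(np.linalg.norm(updates, axis=1))
alpha = np.sqrt(power) / max_norm
tx = alpha * updates

# The wireless channel sums all transmissions and adds noise ("over the air").
rx = tx.sum(axis=0) + rng.normal(scale=noise_std, size=dim)

# The server inverts the precoding to recover the (noisy) average update.
avg_update = rx / (alpha * n_clients)
```
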
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- FedNNNN: Norm-Normalized Neural Network Aggregation for Fast and Accurate Federated Learning [11.414092409682162]
Federated learning (FL) is a distributed learning protocol in which a server aggregates a set of models learned by independent clients to proceed with the learning process.
At present, model averaging, known as FedAvg, is one of the most widely adopted aggregation techniques.
In this work, we find that averaging models from different clients significantly diminishes the norm of the update vectors, resulting in a slow learning rate and low prediction accuracy.
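
This observation is easy to state in code: when client updates disagree in direction, their average is much shorter than the typical client update, so rescaling the averaged update back to the mean client norm restores the step size. A minimal sketch with assumed constants:

```python
import numpy as np

rng = np.random.default_rng(10)
updates = rng.normal(size=(10, 8))       # toy per-client update vectors

avg = updates.mean(axis=0)
mean_norm = np.linalg.norm(updates, axis=1).mean()

# Plain averaging shrinks the step: ||avg|| << mean client norm
# when client updates point in different directions.
print(np.linalg.norm(avg), mean_norm)

# Norm-normalized aggregation: keep the averaged direction but rescale
# its length back to the average client update norm.
normalized = avg * (mean_norm / (np.linalg.norm(avg) + 1e-12))
```
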
arXiv Detail & Related papers (2020-08-11T06:21:15Z)