FedControl: When Control Theory Meets Federated Learning
- URL: http://arxiv.org/abs/2205.14236v1
- Date: Fri, 27 May 2022 21:05:52 GMT
- Title: FedControl: When Control Theory Meets Federated Learning
- Authors: Adnan Ben Mansour, Gaia Carenini, Alexandre Duplessis and David Naccache
- Abstract summary: We distinguish client contributions according to the performance of local learning and its evolution.
The technique is inspired by control theory, and its classification performance is evaluated extensively in the IID framework.
- Score: 63.96013144017572
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To date, the most popular federated learning algorithms use coordinate-wise
averaging of the model parameters. We depart from this approach by
differentiating client contributions according to the performance of local
learning and its evolution. The technique is inspired by control theory,
and its classification performance is evaluated extensively in the IID
framework and compared with FedAvg.
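Since the abstract does not give the exact aggregation rule, the following minimal Python sketch only contrasts FedAvg's coordinate-wise averaging with a hypothetical performance-weighted variant in the spirit of FedControl; the weighting by loss improvement is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Baseline FedAvg: coordinate-wise average, weighted by sample counts."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

def control_style_avg(client_params, prev_losses, curr_losses, temp=1.0):
    """Hypothetical FedControl-style aggregation: clients whose local loss
    is low and still improving get larger weights. The softmax score below
    is an assumption for illustration; the paper's exact rule may differ."""
    prev = np.asarray(prev_losses, dtype=float)
    curr = np.asarray(curr_losses, dtype=float)
    score = (prev - curr) - temp * curr   # improvement minus current loss
    w = np.exp(score - score.max())       # numerically stable softmax
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

# Toy usage: three clients with scalar "models"
params = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
print(fedavg(params, [10, 20, 30]))
print(control_style_avg(params, [0.9, 0.8, 0.7], [0.5, 0.7, 0.7]))
```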
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated decentralized federated learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase (the extrapolation step is sketched after this list).
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR-10/100 with various non-IID data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z)
- A General Control-Theoretic Approach for Reinforcement Learning: Theory and Algorithms [7.081523472610874]
We devise a control-theoretic reinforcement learning approach to support direct learning of the optimal policy.
We empirically evaluate our approach on several classical reinforcement learning tasks.
arXiv Detail & Related papers (2024-06-20T21:50:46Z)
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate (a per-client AMSGrad sketch appears after this list).
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Reinforcement Federated Learning Method Based on Adaptive OPTICS Clustering [19.73560248813166]
This paper proposes an adaptive OPTICS clustering algorithm for federated learning.
The clustering environment is perceived as a Markov decision process, and the goal is to find the best parameters for the OPTICS clustering.
The reliability and practicality of this method are verified experimentally, demonstrating its effectiveness and superiority.
arXiv Detail & Related papers (2023-06-22T13:11:19Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that can modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting (a one-line M-step check appears after this list).
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel FRL algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy using only the ensemble model, without interacting with the environment.
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Federated Learning with Communication Delay in Edge Networks [5.500965885412937]
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.
This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator.
A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step (a sketch of this step appears after this list).
arXiv Detail & Related papers (2020-08-21T06:21:35Z)
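For the DFedCata entry above: Nesterov's extrapolation step, in its classical form, is a look-ahead along the direction of the last update. A minimal sketch, assuming the textbook formulation; the coefficient schedule and its exact placement inside the decentralized aggregation are not given in the summary.

```python
import numpy as np

def nesterov_extrapolate(x_curr, x_prev, alpha=0.9):
    """Classical Nesterov extrapolation: move past the current iterate
    along the last update direction. `alpha` is a hypothetical momentum
    coefficient; DFedCata's actual schedule may differ."""
    return x_curr + alpha * (x_curr - x_prev)
```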
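For the FedLALR entry: each client runs a local adaptive optimizer with its own learning rate. A minimal per-client AMSGrad sketch, assuming the standard AMSGrad update; the client-specific auto-tuning rule itself is in the paper and is not reproduced here.

```python
import numpy as np

class ClientAMSGrad:
    """Standard AMSGrad state held per client; `lr` stands in for the
    client-specific, auto-tuned learning rate that FedLALR schedules."""

    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)      # first moment (momentum)
        self.v = np.zeros(dim)      # second moment
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```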
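For the EM-perspective entry: the claimed correspondence admits a quick check. With an isotropic Gaussian prior over client parameters, the hard-EM M-step for the prior mean is exactly an unweighted average of the client solutions, i.e. FedAvg's server update (uniform client weighting assumed).

```latex
% M-step for the server parameter \theta, given hard client solutions
% \theta_k and prior p(\theta_k \mid \theta) = \mathcal{N}(\theta_k; \theta, \sigma^2 I):
\theta^\star = \arg\max_{\theta} \sum_{k=1}^{K} \log \mathcal{N}(\theta_k; \theta, \sigma^2 I)
             = \frac{1}{K} \sum_{k=1}^{K} \theta_k
```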
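For the FedDelAvg entry: the synchronization step described above is a convex combination of the current local model and the delayed global model. A minimal sketch; the choice of `gamma` is a free parameter here, not a value from the paper.

```python
import numpy as np

def feddelavg_sync(local_param, delayed_global, gamma=0.5):
    """FedDelAvg-style synchronization at a device: weight the current
    local model against the delayed global model it just received."""
    return gamma * np.asarray(local_param) + (1 - gamma) * np.asarray(delayed_global)
```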