FedADMM: A Federated Primal-Dual Algorithm Allowing Partial
Participation
- URL: http://arxiv.org/abs/2203.15104v1
- Date: Mon, 28 Mar 2022 21:20:43 GMT
- Title: FedADMM: A Federated Primal-Dual Algorithm Allowing Partial
Participation
- Authors: Han Wang, Siddartha Marella, James Anderson
- Abstract summary: In particular, it follows a client-server broadcast model and is particularly appealing because of its ability to accommodate heterogeneity in client compute and storage resources.
Our contribution is to offer a new federated learning algorithm, FedADMM, for solving non-convex composite optimization problems with non-smooth regularizers.
- Score: 3.7677951749356686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a framework for distributed optimization that places
emphasis on communication efficiency. In particular, it follows a client-server
broadcast model and is particularly appealing because of its ability to
accommodate heterogeneity in client compute and storage resources, non-i.i.d.
data assumptions, and data privacy. Our contribution is to offer a new
federated learning algorithm, FedADMM, for solving non-convex composite
optimization problems with non-smooth regularizers. We prove convergence of
FedADMM for the case when not all clients are able to participate in a given
communication round under a very general sampling model.
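To make the setting concrete, here is a minimal sketch of consensus ADMM with partial client participation, applied to a toy composite problem (per-client least-squares losses plus an L1 regularizer handled through its proximal operator). The sampling scheme, update rules, and all names are illustrative simplifications under these assumptions, not the exact FedADMM algorithm from the paper.
```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, i.e., the non-smooth regularizer g.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fedadmm_sketch(A, b, lam=0.1, rho=1.0, rounds=50, frac=0.3, seed=0):
    """Consensus ADMM for min_x sum_i ||A_i x - b_i||^2 + lam * ||x||_1,
    with a random fraction of clients participating in each round."""
    rng = np.random.default_rng(seed)
    N, d = len(A), A[0].shape[1]
    x = np.zeros((N, d))  # local primal variables, one per client
    y = np.zeros((N, d))  # local dual variables, one per client
    z = np.zeros(d)       # global (server) model
    for _ in range(rounds):
        sampled = np.flatnonzero(rng.random(N) < frac)  # partial participation
        for i in sampled:
            # Local primal update: minimize the client's augmented Lagrangian
            # ||A_i x - b_i||^2 + <y_i, x - z> + (rho/2) ||x - z||^2 exactly.
            H = 2.0 * A[i].T @ A[i] + rho * np.eye(d)
            x[i] = np.linalg.solve(H, 2.0 * A[i].T @ b[i] + rho * z - y[i])
            y[i] += rho * (x[i] - z)  # local dual ascent step
        # Server step: absent clients keep stale iterates; the prox of the
        # regularizer is applied to the average of (x_i + y_i / rho).
        z = soft_threshold((x + y / rho).mean(axis=0), lam / (rho * N))
    return z
```
The server's proximal step is where the non-smooth term enters, and clients that were not sampled simply contribute their previous primal and dual iterates, which captures the partial-participation idea in spirit.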
Related papers
- FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning [1.802525429431034]
We propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa.
The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions.
Our proposed algorithm can reduce the clients' local computational load significantly and also accelerate the learning process compared to the vanilla FedADMM.
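As a rough illustration of the "inexact" idea, the sketch below stops a client's local solve early once a relative residual criterion is met; the criterion and constants are hypothetical stand-ins, and the self-adaptive penalty update from the paper is not modeled here.
```python
import numpy as np

def inexact_local_update(grad_f, z, y, rho, eps=0.3, lr=0.05, max_iter=200):
    """Approximately minimize f_i(x) + <y, x - z> + (rho/2) ||x - z||^2 by
    gradient descent, stopping once the gradient norm falls below
    eps * ||x - z|| (an illustrative relative inexactness criterion)."""
    x = z.copy()
    for _ in range(max_iter):
        g = grad_f(x) + y + rho * (x - z)  # gradient of the local augmented Lagrangian
        if np.linalg.norm(g) <= eps * max(np.linalg.norm(x - z), 1e-8):
            break  # accurate enough under the relative criterion; stop early
        x -= lr * g
    return x
```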
arXiv Detail & Related papers (2024-02-21T18:19:20Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
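For intuition on the mixture-model ingredient, the snippet below fits a GMM to one client's synthetic inputs and flags low-likelihood points as novel, which is the density-based flavor of uncertainty quantification; it is not the federated EM procedure FedGMM actually uses.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_data = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))  # synthetic local inputs

# Fit a mixture to this client's input distribution.
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)

test = np.array([[0.1, -0.2],   # in-distribution point
                 [8.0, 8.0]])   # far outside the training data
log_lik = gmm.score_samples(test)  # per-sample log-likelihood under the GMM
threshold = np.quantile(gmm.score_samples(client_data), 0.01)
print(log_lik < threshold)  # expect [False, True]: the outlier is flagged as novel
```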
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
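The summary above is high-level; purely as an illustration of entropy-style reweighting, the sketch below down-weights clients whose updates deviate most from the mean via a softmax (the maximum-entropy distribution under a mean constraint). FedEnt's actual weighting rule is derived differently in the paper.
```python
import numpy as np

def entropy_weighted_aggregate(updates, beta=5.0):
    """Aggregate client updates with softmax weights over (negative) deviation
    from the mean update, so outlying clients are down-weighted. This is an
    illustrative entropy-style reweighting, not FedEnt's actual rule."""
    U = np.stack(updates)                         # (num_clients, dim)
    dev = np.linalg.norm(U - U.mean(axis=0), axis=1)
    w = np.exp(-beta * dev)                       # softmax numerator
    w /= w.sum()                                  # normalized aggregation weights
    return w @ U                                  # weighted average of updates
```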
arXiv Detail & Related papers (2023-03-27T07:57:04Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
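To ground the idea, here is a plaintext sketch of one plausible imbalance measure (squared distance of the pooled label distribution from uniform) and a greedy sampler that minimizes it; the function names are hypothetical, and the real Fed-CBS evaluates its measure under homomorphic encryption so the server never sees raw label counts.
```python
import numpy as np

def class_imbalance(counts):
    """Squared distance between the grouped label distribution and uniform;
    an illustrative stand-in for the paper's imbalance measure."""
    p = counts / counts.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))

def sample_clients(client_counts, k):
    """Greedily pick k clients whose pooled data is most class-balanced
    (plaintext sketch of the selection policy only)."""
    chosen = []
    pooled = np.zeros_like(client_counts[0], dtype=float)
    remaining = set(range(len(client_counts)))
    for _ in range(k):
        best = min(remaining, key=lambda i: class_imbalance(pooled + client_counts[i]))
        chosen.append(best)
        pooled += client_counts[best]
        remaining.remove(best)
    return chosen
```
Greedy selection is just one simple policy; any mechanism that lowers the measure for the sampled group fits the same interface.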
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure, with theoretical speedup guarantees, that simultaneously handles two such hurdles: straggling clients and the need for personalization.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
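The split described above, one representation learned from everyone's data plus a lightweight per-client head, can be sketched as follows (an illustrative PyTorch architecture with hypothetical names, not the paper's training algorithm or guarantees):
```python
import torch
import torch.nn as nn

# Shared representation trained with all clients' data, plus one
# user-specific head per client for the personalized solution.
shared = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
heads = {cid: nn.Linear(64, 10) for cid in range(5)}

def personalized_forward(cid, x):
    # Personalized prediction: common body, client-specific head.
    return heads[cid](shared(x))

# In each round, clients would jointly update `shared` (e.g., by averaging)
# while each client fits only its own head locally.
out = personalized_forward(2, torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 10])
```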
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that addresses two challenges, distribution shift across clients and inter-client noise, simultaneously.
We provide comprehensive theoretical analysis, covering robustness, convergence, and generalization.
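The "local mixup" ingredient is standard mixup run inside each client; a minimal sketch (assuming one-hot label vectors) looks like this, with the distributionally robust aggregation on the server side omitted:
```python
import numpy as np

def local_mixup(x, y, alpha=0.2, rng=None):
    """Standard mixup applied to one client's batch: convex combinations of
    shuffled pairs of inputs and (one-hot) labels. Illustrative only; DRFLM
    pairs this with a distributionally robust objective at the server."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient
    idx = rng.permutation(len(x))         # random partner for each sample
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]
```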
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- FedADMM: A Robust Federated Deep Learning Framework with Adaptivity to System Heterogeneity [4.2059108111562935]
Federated Learning (FL) is an emerging framework for distributed processing of large data volumes by edge devices.
In this paper, we introduce a new FedADMM-based FL protocol.
We show that FedADMM consistently outperforms all baseline methods in terms of communication efficiency.
arXiv Detail & Related papers (2022-04-07T15:58:33Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)