Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training
- URL: http://arxiv.org/abs/2211.05554v1
- Date: Thu, 10 Nov 2022 13:20:56 GMT
- Title: Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training
- Authors: Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
- Abstract summary: Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
- Score: 80.03567604524268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-IID data distribution across clients and poisoning attacks are two main
challenges in real-world federated learning systems. While both of them have
attracted great research interest with specific strategies developed, no known
solution manages to address them in a unified framework. To jointly overcome
both challenges, we propose SmartFL, a generic approach that optimizes the
server-side aggregation process with a small clean server-collected proxy
dataset (e.g., around one hundred samples, 0.2% of the dataset) via a subspace
training technique. Specifically, the aggregation weight of each participating
client at each round is optimized using the server-collected proxy data, which
is essentially the optimization of the global model in the convex hull spanned
by client models. Since at each round, the number of tunable parameters
optimized on the server side equals the number of participating clients (thus
independent of the model size), we are able to train a global model with
massive parameters using only a small amount of proxy data. We provide
theoretical analyses of the convergence and generalization capacity for
SmartFL. Empirically, SmartFL achieves state-of-the-art performance on both
federated learning with non-IID data distribution and federated learning with
malicious clients. The source code will be released.
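As a concrete illustration of the aggregation step described in the abstract, the sketch below learns one weight per participating client on a small proxy set, so the global model stays inside the convex hull of the client models. This is a minimal sketch assuming PyTorch 2.x; names such as `client_states` (a list of client state dicts) and `proxy_loader` are illustrative, not the authors' released API.

```python
import torch
import torch.nn.functional as F

def optimize_aggregation_weights(model, client_states, proxy_loader,
                                 steps=50, lr=0.1, device="cpu"):
    """Learn one aggregation weight per client by minimizing proxy-set loss
    over the convex hull of client models (illustrative, not the paper's code)."""
    # One tunable logit per participating client; softmax keeps the
    # resulting weights on the probability simplex (the convex hull).
    logits = torch.zeros(len(client_states), device=device, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)

    # Stack each parameter/buffer across clients once for a differentiable mix.
    names = list(client_states[0].keys())
    stacked = {n: torch.stack([s[n].float().to(device) for s in client_states])
               for n in names}

    for _ in range(steps):
        alpha = torch.softmax(logits, dim=0)
        # Convex combination of the client parameters.
        mixed = {n: torch.einsum("c,c...->...", alpha, stacked[n]) for n in names}
        x, y = next(iter(proxy_loader))
        out = torch.func.functional_call(model, mixed, (x.to(device),))
        loss = F.cross_entropy(out, y.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        alpha = torch.softmax(logits, dim=0)
        # Return the aggregated global model parameters for this round.
        return {n: torch.einsum("c,c...->...", alpha, stacked[n]) for n in names}
```

Note that the number of trainable parameters here equals the number of participating clients, independent of model size, which is why a proxy set of around one hundred samples suffices per the abstract's claim.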
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- LEFL: Low Entropy Client Sampling in Federated Learning [6.436397118145477]
Federated learning (FL) is a machine learning paradigm where multiple clients collaborate to optimize a single global model using their private data.
We propose LEFL, an alternative sampling strategy that performs a one-time clustering of clients based on the model's learned high-level features.
We show that clients sampled with this approach yield low relative entropy with respect to the global data distribution.
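A minimal sketch of the clustering-then-sampling idea, assuming scikit-learn and SciPy; `client_features` (per-client feature summaries) and `client_label_hists` (per-client label histograms) are illustrative stand-ins for the quantities the paper derives from the model:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans

def sample_low_entropy_clients(client_features, client_label_hists, k, seed=0):
    """One-time KMeans over client feature summaries, then one client per
    cluster (illustrative stand-in for LEFL's exact procedure)."""
    rng = np.random.default_rng(seed)
    clusters = KMeans(n_clusters=k, n_init=10,
                      random_state=seed).fit_predict(client_features)
    picks = [rng.choice(np.flatnonzero(clusters == c)) for c in range(k)]

    # Relative entropy (KL divergence) of the sampled label mix against the
    # global mix; low values mean the sample mirrors the overall distribution.
    global_p = client_label_hists.sum(axis=0)
    sample_p = client_label_hists[picks].sum(axis=0)
    kl = entropy(sample_p / sample_p.sum(), global_p / global_p.sum())
    return picks, kl
```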
arXiv Detail & Related papers (2023-12-29T01:44:20Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, FedIns, that handles intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
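The core idea, sketched below under the assumption that the server knows (or can estimate) client data sizes, is to make the pooled update behave like a uniform sample over all training examples rather than over clients; FedSampling's actual privacy-preserving size estimation is omitted:

```python
import numpy as np

def data_uniform_client_sample(client_sizes, num_clients, seed=0):
    """Draw clients with probability proportional to local data size, so the
    union of their data approximates a uniform sample over all examples.
    Illustrative sketch only; the paper estimates sizes privately."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(client_sizes, dtype=float)
    probs = sizes / sizes.sum()
    return rng.choice(sizes.size, size=num_clients, replace=False, p=probs)
```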
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
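A minimal sketch of the prototype mechanic, assuming clients report per-class mean embeddings and class counts; the tensor shapes and helper names are illustrative, not the paper's implementation:

```python
import torch

def aggregate_prototypes(client_protos, client_counts):
    """Count-weighted average of per-class client prototypes.
    client_protos: list of (num_classes, dim); client_counts: list of (num_classes,)."""
    protos = torch.stack(client_protos)            # (clients, classes, dim)
    counts = torch.stack(client_counts).float()    # (clients, classes)
    weights = counts / counts.sum(dim=0).clamp(min=1.0)
    return torch.einsum("kc,kcd->cd", weights, protos)

def nearest_prototype_predict(embeddings, global_protos):
    # Classify by the smallest Euclidean distance to a global prototype.
    return torch.cdist(embeddings, global_protos).argmin(dim=1)
```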
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-IID distributed data results in deflected local optima.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
A genetic algorithm (GA) is used to solve the multi-objective optimization problem.
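A toy genetic-algorithm sketch for picking a size-k client deployment under several criteria; the columns of `criteria` (e.g., availability, energy, data quality) and the encoding and operators are generic assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def ga_select_clients(criteria, k, pop=30, gens=50, seed=0):
    """Evolve size-k client subsets; fitness is the summed criteria score
    of the selected clients. criteria: array of shape (n_clients, n_criteria)."""
    rng = np.random.default_rng(seed)
    n = criteria.shape[0]
    fitness = lambda sel: criteria[sel].sum()

    # Initial population: random size-k subsets of the candidate clients.
    population = [rng.choice(n, size=k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.choice(len(survivors), size=2, replace=False)
            # Crossover: sample k clients from the union of two parents.
            union = np.union1d(survivors[a], survivors[b])
            child = rng.choice(union, size=k, replace=False)
            # Mutation: occasionally swap one client for an unselected one.
            if rng.random() < 0.2:
                outsiders = np.setdiff1d(np.arange(n), child)
                if outsiders.size:
                    child[rng.integers(k)] = rng.choice(outsiders)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```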
arXiv Detail & Related papers (2022-11-05T13:41:19Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across different edge devices (clients) results in slow model training and accuracy reduction in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
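The sketch below shows only the policy interface such an agent might expose, assuming PyTorch; the statistics fed to the network and its architecture are illustrative, and the paper trains this policy with deep reinforcement learning rather than leaving it untrained:

```python
import torch
import torch.nn as nn

class ImpactPolicy(nn.Module):
    """Toy stand-in for FedDRL's agent: maps per-client statistics
    (e.g., local loss and data size) to aggregation impact factors."""
    def __init__(self, num_stats=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_stats, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, client_stats):            # (clients, num_stats)
        scores = self.net(client_stats).squeeze(-1)
        # Softmax keeps the impact factors positive and summing to one,
        # so they can be used directly as aggregation weights.
        return torch.softmax(scores, dim=0)
```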
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
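When each client posterior is approximated as a diagonal Gaussian via a Laplace approximation, the server-side combination reduces to a product of Gaussians, sketched below; the formulas are standard, though the paper's exact online protocol may differ:

```python
import torch

def aggregate_gaussian_posteriors(means, precisions):
    """Product of diagonal Gaussians N(mu_k, P_k^{-1}): the aggregated
    precision is the sum of client precisions, and the aggregated mean is
    the precision-weighted average of client means."""
    P = torch.stack(precisions)   # (clients, dim), e.g., diagonal Fisher terms
    M = torch.stack(means)        # (clients, dim)
    agg_precision = P.sum(dim=0)
    agg_mean = (P * M).sum(dim=0) / agg_precision.clamp(min=1e-12)
    return agg_mean, agg_precision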
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- FedSmart: An Auto Updating Federated Learning Optimization Mechanism [23.842595615337565]
Federated learning has made an important contribution to preserving data privacy.
Some existing methods for ensuring model robustness on non-IID data, such as the data-sharing strategy or pretraining, may lead to privacy leakage.
In this paper, a performance-based parameter return method for optimization, termed FederatedSmart, is introduced.
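One plausible reading of "performance-based parameter return," sketched below with validation accuracy as the performance signal; the softmax temperature is an assumption for illustration, not a detail from the paper:

```python
import numpy as np

def performance_weights(val_accuracies, temperature=0.1):
    """Mix returned client parameters by validation performance instead of
    uniformly; higher-accuracy clients receive larger weights (illustrative)."""
    acc = np.asarray(val_accuracies, dtype=float)
    w = np.exp(acc / temperature)   # temperature controls weight sharpness
    return w / w.sum()
```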
arXiv Detail & Related papers (2020-09-16T03:59:33Z)