FedSmart: An Auto Updating Federated Learning Optimization Mechanism
- URL: http://arxiv.org/abs/2009.07455v1
- Date: Wed, 16 Sep 2020 03:59:33 GMT
- Title: FedSmart: An Auto Updating Federated Learning Optimization Mechanism
- Authors: Anxun He, Jianzong Wang, Zhangcheng Huang and Jing Xiao
- Abstract summary: Federated learning has made an important contribution to preserving data privacy.
Some existing methods of ensuring model robustness on non-IID data, such as the data-sharing strategy or pretraining, may lead to privacy leakage.
In this paper, a performance-based parameter return method for optimization is introduced, termed FederatedSmart (FedSmart).
- Score: 23.842595615337565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has made an important contribution to preserving
data privacy. Many previous works are based on the assumption that the data are
independently and identically distributed (IID). As a result, model performance
on non-identically and independently distributed (non-IID) data falls short of
expectations, even though non-IID data is the realistic situation. Some
existing methods of ensuring model robustness on non-IID data, such as the
data-sharing strategy or pretraining, may lead to privacy leakage. In addition,
some participants may try to poison the model with low-quality data. In this
paper, a performance-based parameter return method for optimization is
introduced, termed FederatedSmart (FedSmart). It optimizes a different model
for each client by sharing global gradients and extracting a slice of the data
from each client as a local validation set; the accuracy that the model
achieves in round t determines the weights for the next round. The experimental
results show that FedSmart enables participants to allocate greater weights to
clients with similar data distributions.
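As a minimal sketch of the mechanism the abstract describes, the snippet below weights each client's contribution in round t + 1 by the validation accuracy its model achieved in round t. The function name, the softmax normalization, and the toy data are illustrative assumptions, not the paper's exact update rule:

```python
import numpy as np

def fedsmart_round(client_params, validation_accuracies, temperature=1.0):
    """Aggregate client parameters for one round, weighting each client by
    the accuracy its model achieved on the local validation set.

    client_params: list of 1-D parameter vectors, one per client.
    validation_accuracies: accuracy of each client's model in round t.
    Returns the weighted parameters used to start round t + 1.
    """
    accs = np.asarray(validation_accuracies, dtype=float)
    # Turn accuracies into normalized weights (softmax is an assumption;
    # any monotone normalization would express the same idea).
    w = np.exp(accs / temperature)
    w /= w.sum()
    stacked = np.stack(client_params)          # shape: (n_clients, n_params)
    return (w[:, None] * stacked).sum(axis=0)  # accuracy-weighted average

# Toy usage: three clients, the third performs best on validation data
params = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
accuracies = [0.55, 0.60, 0.90]
print(fedsmart_round(params, accuracies))
```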
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
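A toy rendering of the bi-level idea under assumed objectives: an inner MAP fit per client, regularized toward a shared prior, and an outer step that refits the prior to the personalized solutions. The linear model, Gaussian prior, and mean-based outer update are illustrative choices, not FedMAP's actual formulation:

```python
import numpy as np

def map_personalize(X, y, prior, lam=1.0, lr=0.1, steps=200):
    """Inner level: MAP estimate of one client's linear model, minimizing
    squared error plus a Gaussian log-prior centered at `prior`."""
    theta = prior.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y) + lam * (theta - prior)
        theta -= lr * grad
    return theta

def fedmap_outer(clients, prior, rounds=10, lam=1.0):
    """Outer level: refit the shared prior to the personalized solutions
    (taking their mean is an illustrative choice)."""
    for _ in range(rounds):
        thetas = [map_personalize(X, y, prior, lam) for X, y in clients]
        prior = np.mean(thetas, axis=0)
    return prior

# Toy usage with synthetic clients
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
print(fedmap_outer(clients, prior=np.zeros(3)))
```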
arXiv Detail & Related papers (2024-05-29T11:28:06Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Dirichlet-based Uncertainty Quantification for Personalized Federated
Learning with Improved Posterior Networks [9.54563359677778]
This paper presents a new approach to federated learning that allows selecting between the global model and personalized ones.
It is achieved through careful modeling of predictive uncertainties that helps to detect local and global in- and out-of-distribution data.
A comprehensive experimental evaluation on popular real-world image datasets shows the superior performance of the model in the presence of out-of-distribution data.
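One common way to read uncertainty off a Dirichlet-based classifier is the ratio of the number of classes to the total concentration; this convention is borrowed from evidential deep learning and is only an assumption about what the paper's posterior networks compute:

```python
import numpy as np

def dirichlet_uncertainty(alpha):
    """Given Dirichlet concentration parameters alpha (K,) predicted for one
    input, return the expected class probabilities and a scalar uncertainty.
    K / sum(alpha) is the usual evidential-learning score, assumed here."""
    alpha = np.asarray(alpha, dtype=float)
    probs = alpha / alpha.sum()             # mean of the Dirichlet
    uncertainty = len(alpha) / alpha.sum()  # low total evidence => high value
    return probs, uncertainty

# In-distribution-looking prediction (lots of evidence for class 0)
print(dirichlet_uncertainty([40.0, 1.0, 1.0]))
# Out-of-distribution-looking prediction (little evidence overall)
print(dirichlet_uncertainty([1.1, 1.0, 1.0]))
```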
arXiv Detail & Related papers (2023-12-18T14:30:05Z) - pFedSim: Similarity-Aware Model Aggregation Towards Personalized
Federated Learning [27.668944118750115]
The federated learning (FL) paradigm has emerged to preserve data privacy during model training.
One of the biggest challenges in FL lies in non-IID (not identically and independently distributed) data.
We propose a novel pFedSim (pFL based on model similarity) algorithm in this work.
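A minimal sketch of similarity-aware aggregation: derive per-client weights from pairwise cosine similarity of flattened model parameters, so similar clients contribute more to each other's personalized models. The cosine measure, the row-softmax normalization, and the function names are illustrative assumptions, not necessarily what pFedSim uses:

```python
import numpy as np

def similarity_weights(client_params):
    """Per-client aggregation weights from pairwise cosine similarity of
    flattened model parameters; each row sums to 1."""
    P = np.stack(client_params)
    P_norm = P / np.linalg.norm(P, axis=1, keepdims=True)
    sim = P_norm @ P_norm.T                  # pairwise cosine similarity
    W = np.exp(sim)
    return W / W.sum(axis=1, keepdims=True)

def personalized_aggregate(client_params):
    """Row i of the result is client i's personalized model: a similarity-
    weighted average over all client models."""
    W = similarity_weights(client_params)
    return W @ np.stack(client_params)

# Toy usage: clients 1 and 2 are similar, client 3 is not
params = [np.ones(4), 1.1 * np.ones(4), -np.ones(4)]
print(personalized_aggregate(params))
```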
arXiv Detail & Related papers (2023-05-25T04:25:55Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
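As a minimal sketch of the novel-sample-detection side, the snippet below fits a GMM to one client's inputs and flags low-likelihood test points. The component count and quantile threshold are arbitrary assumptions, and FedGMM itself fits the mixture jointly across clients rather than per client:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_inputs = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fit a GMM to this client's input distribution (3 components is arbitrary)
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_inputs)

# Novel-sample detection: flag inputs with low likelihood under the GMM
test = np.array([[0.1, -0.2],   # in-distribution
                 [8.0, 8.0]])   # far from the training data
log_lik = gmm.score_samples(test)
threshold = np.quantile(gmm.score_samples(client_inputs), 0.01)
print(log_lik < threshold)      # expect [False  True]
```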
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Optimizing Server-side Aggregation For Robust Federated Learning via
Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
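A toy rendering of optimizing server-side aggregation: treat the aggregation weights over client models as learnable parameters and fit them against a small server-side proxy set. The linear model, softmax parameterization, and finite-difference gradient are illustrative choices, not SmartFL's actual subspace training procedure:

```python
import numpy as np

def proxy_loss(theta, X, y):
    """Squared error of a linear model on the server's small proxy set."""
    return np.mean((X @ theta - y) ** 2)

def optimize_aggregation(client_params, X_proxy, y_proxy, lr=0.5, steps=300):
    """Optimize aggregation weights (a point in the simplex spanned by
    client models) against the proxy set via finite differences."""
    P = np.stack(client_params)
    z = np.zeros(len(P))                    # softmax logits over clients
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(len(z)):             # central-difference gradient
            dz = np.zeros_like(z)
            dz[i] = 1e-4
            w1 = np.exp(z + dz); w1 /= w1.sum()
            w0 = np.exp(z - dz); w0 /= w0.sum()
            grad[i] = (proxy_loss(w1 @ P, X_proxy, y_proxy)
                       - proxy_loss(w0 @ P, X_proxy, y_proxy)) / 2e-4
        z -= lr * grad
    w = np.exp(z); w /= w.sum()
    return w @ P, w

# Toy usage with synthetic client models and proxy data
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
clients = [rng.normal(size=3) for _ in range(4)]
theta, w = optimize_aggregation(clients, X, y)
print(w)
```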
arXiv Detail & Related papers (2022-11-10T13:20:56Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
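The naive, retraining-free baseline for such an influence measure is leave-one-out: how far the aggregated parameters move when one client is removed. The sketch below shows that baseline under a size-weighted average; the paper proposes an efficient estimator rather than this brute-force computation:

```python
import numpy as np

def client_influence(client_params, sizes):
    """Leave-one-out influence of each client on the aggregated parameters:
    the distance the size-weighted average moves when that client is
    removed. Naive baseline, not the paper's estimator."""
    P = np.stack(client_params)
    n = np.asarray(sizes, dtype=float)
    full = (n[:, None] * P).sum(axis=0) / n.sum()
    influences = []
    for k in range(len(P)):
        mask = np.arange(len(P)) != k
        loo = (n[mask, None] * P[mask]).sum(axis=0) / n[mask].sum()
        influences.append(np.linalg.norm(full - loo))
    return influences

# Toy usage: the outlier client has small weight but distant parameters
params = [np.zeros(3), np.ones(3), 5 * np.ones(3)]
print(client_influence(params, sizes=[100, 100, 10]))
```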
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
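A minimal sketch of the multi-center idea: alternate between matching each client to its nearest center in parameter space and recomputing each center as the mean of its assigned clients, a k-means-style rendering that is only an approximation of the paper's aggregation mechanism:

```python
import numpy as np

def multi_center_aggregate(client_params, n_centers=2, iters=10, seed=0):
    """Learn n_centers global models from client parameters and the
    client-to-center matching, k-means style."""
    P = np.stack(client_params)
    rng = np.random.default_rng(seed)
    centers = P[rng.choice(len(P), n_centers, replace=False)]
    for _ in range(iters):
        # Match each client to the closest global model
        assign = np.argmin(
            np.linalg.norm(P[:, None] - centers[None], axis=2), axis=1)
        # Recompute each center from its matched clients
        for c in range(n_centers):
            if np.any(assign == c):
                centers[c] = P[assign == c].mean(axis=0)
    return centers, assign

# Toy usage: two natural clusters of clients
params = [np.zeros(2), 0.1 * np.ones(2), np.ones(2), 1.1 * np.ones(2)]
centers, assign = multi_center_aggregate(params)
print(assign)
```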
arXiv Detail & Related papers (2020-05-03T09:14:31Z) - Federated Visual Classification with Real-World Data Distribution [9.564468846277366]
We characterize the effect real-world data distributions have on distributed learning, using as a benchmark the standard Federated Averaging (FedAvg) algorithm.
We introduce two new large-scale datasets for species and landmark classification, with realistic per-user data splits.
We also develop two new algorithms (FedVC, FedIR) that intelligently resample and reweight over the client pool, bringing large improvements in accuracy and stability in training.
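The reweighting half of this idea (FedIR) can be sketched as importance weights that correct each client's skewed label distribution toward a target prior; the exact weighting FedIR uses may differ in its details:

```python
import numpy as np

def importance_weights(client_labels, global_prior, n_classes):
    """Per-example weights p_global(y) / p_client(y): examples of classes
    the client over-represents are down-weighted, and vice versa."""
    counts = np.bincount(client_labels, minlength=n_classes)
    client_prior = counts / counts.sum()
    ratio = np.asarray(global_prior) / np.maximum(client_prior, 1e-12)
    return ratio[client_labels]   # one weight per training example

# Toy usage: a client over-representing class 0 gets its class-0 examples
# down-weighted relative to a uniform global prior
labels = np.array([0, 0, 0, 1, 2])
print(importance_weights(labels, global_prior=[1/3, 1/3, 1/3], n_classes=3))
```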
arXiv Detail & Related papers (2020-03-18T07:55:49Z)