Asynchronous Federated Learning with Incentive Mechanism Based on
Contract Theory
- URL: http://arxiv.org/abs/2310.06448v1
- Date: Tue, 10 Oct 2023 09:17:17 GMT
- Title: Asynchronous Federated Learning with Incentive Mechanism Based on
Contract Theory
- Authors: Danni Yang, Yun Ji, Zhoubin Kou, Xiaoxiong Zhong, Sheng Zhang
- Abstract summary: We propose a novel asynchronous FL framework that integrates an incentive mechanism based on contract theory.
Our framework exhibits a 1.35% accuracy improvement over the ideal Local SGD under attacks.
- Score: 5.502596101979607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the challenges posed by the heterogeneity inherent in federated
learning (FL) and to attract high-quality clients, various incentive mechanisms
have been employed. However, existing incentive mechanisms are typically
utilized in conventional synchronous aggregation, resulting in significant
straggler issues. In this study, we propose a novel asynchronous FL framework
that integrates an incentive mechanism based on contract theory. Within the
incentive mechanism, we strive to maximize the utility of the task publisher by
adaptively adjusting clients' local model training epochs, taking into account
factors such as time delay and test accuracy. In the asynchronous scheme,
considering client quality, we devise aggregation weights and an access control
algorithm to facilitate asynchronous aggregation. Experiments on the MNIST
dataset demonstrate that, in the absence of attacks, the test accuracy achieved
by our framework is 3.12% and 5.84% higher than that of FedAvg and FedProx,
respectively. Under attacks, the framework exhibits a 1.35% accuracy
improvement over the ideal Local SGD. Furthermore,
aiming for the same target accuracy, our framework demands notably less
computation time than both FedAvg and FedProx.
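As a concrete illustration of the asynchronous scheme, the sketch below merges a single client update into the global model with a weight that grows with client quality and decays with staleness. The weighting function, decay constant, and quality score are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def async_aggregate(global_model, client_update, staleness, quality,
                    staleness_decay=0.5, base_lr=1.0):
    """Merge one asynchronous client update into the global model.

    Illustrative rule: the mixing weight scales with a quality score in
    [0, 1] and decays exponentially with staleness (rounds since the
    client pulled the model); the paper's exact weights differ.
    """
    alpha = base_lr * quality * np.exp(-staleness_decay * staleness)
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - alpha) * global_model + alpha * client_update

# Toy usage: a stale, medium-quality update moves the model only slightly.
print(async_aggregate(np.zeros(4), np.ones(4), staleness=3, quality=0.6))
```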
Related papers
- Aiding Global Convergence in Federated Learning via Local Perturbation and Mutual Similarity Information [6.767885381740953]
Federated learning has emerged as a distributed optimization paradigm.
We propose a modified framework wherein each client locally performs a perturbed gradient step.
We show that our algorithm speeds up convergence by a margin of up to 30 global rounds compared with FedAvg.
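A minimal sketch of a perturbed local gradient step is given below; the Gaussian perturbation, the radius rho, and the toy objective are assumptions, and the paper's actual perturbation and use of mutual similarity information differ.

```python
import numpy as np

def perturbed_local_step(w, grad_fn, lr=0.1, rho=0.01, rng=None):
    """One hypothetical perturbed local step: evaluate the gradient at
    a randomly perturbed point w + rho * xi rather than at w itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(np.shape(w))
    return w - lr * grad_fn(w + rho * xi)

# Toy usage on f(w) = ||w||^2 / 2, whose gradient is w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = perturbed_local_step(w, grad_fn=lambda v: v)
print(w)  # near the optimum at the origin
```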
arXiv Detail & Related papers (2024-10-07T23:14:05Z)
- FedStaleWeight: Buffered Asynchronous Federated Learning with Fair Aggregation via Staleness Reweighting [9.261784956541641]
Asynchronous Federated Learning (AFL) methods have emerged as promising alternatives to their synchronous counterparts, which are bottlenecked by the slowest agent.
However, AFL biases model training heavily towards agents that can produce updates faster, leaving slower agents behind.
We introduce FedStaleWeight, an algorithm that addresses fairness in aggregating asynchronous client updates by employing average staleness to compute fair re-weightings.
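The sketch below shows one hypothetical staleness-based re-weighting in the spirit of FedStaleWeight: updates from slower (more stale) clients are boosted relative to the average staleness. The exact weighting formula is an assumption.

```python
import numpy as np

def staleness_fair_weights(staleness):
    """Hypothetical fair re-weighting from staleness: updates from
    slower (more stale) clients are boosted relative to the average
    staleness, then normalized. FedStaleWeight's exact formula may differ.
    """
    s = np.asarray(staleness, dtype=float)
    w = (1.0 + s) / (1.0 + s.mean())
    return w / w.sum()

# Toy usage: the slowest client receives the largest aggregation weight.
print(staleness_fair_weights([0, 1, 5]))
```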
arXiv Detail & Related papers (2024-06-05T02:52:22Z)
- On the Role of Server Momentum in Federated Learning [85.54616432098706]
We propose a general framework for server momentum that covers a large class of momentum schemes previously unexplored in federated learning (FL).
We provide rigorous convergence analysis for the proposed framework.
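For intuition, a generic server-momentum round is sketched below in the style of FedAvgM: the server treats the averaged client progress as a pseudo-gradient and folds it into a momentum buffer. The paper's framework covers a broader class of schemes; the constants here are illustrative.

```python
import numpy as np

def server_momentum_step(x, m, pseudo_grad, beta=0.9, eta=1.0):
    """One generic server-momentum round (illustrative constants)."""
    m = beta * m + pseudo_grad   # update the server momentum buffer
    x = x - eta * m              # step the global model
    return x, m

# Toy usage: pseudo_grad is typically x_old - mean(client_models).
x, m = np.ones(3), np.zeros(3)
x, m = server_momentum_step(x, m, pseudo_grad=0.1 * np.ones(3))
print(x, m)
```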
arXiv Detail & Related papers (2023-12-19T23:56:49Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
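A simplified sketch of a federated conformal threshold is shown below; it pools calibration scores across clients and takes the standard finite-sample conformal quantile, whereas the paper's framework handles the weaker partial-exchangeability setting more carefully.

```python
import numpy as np

def federated_conformal_threshold(client_scores, alpha=0.1):
    """Pool per-client calibration (nonconformity) scores and return
    the (1 - alpha) conformal threshold with the usual finite-sample
    correction. Simplified: assumes full exchangeability across clients.
    """
    s = np.concatenate([np.asarray(c, dtype=float) for c in client_scores])
    n = s.size
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(s, q)

# Toy usage: labels whose score falls below the threshold enter the set.
rng = np.random.default_rng(0)
print(federated_conformal_threshold([rng.random(50) for _ in range(3)]))
```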
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
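As a rough illustration of the GMM ingredient, the sketch below fits a mixture to one client's inputs and uses log-likelihood to flag novel samples; FedGMM itself fits the mixture jointly across clients, so this centralized, per-client version is only a stand-in.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Local stand-in: fit a mixture to one client's inputs and flag
# low-likelihood points as novel (out-of-distribution) samples.
rng = np.random.default_rng(0)
client_inputs = rng.normal(size=(500, 2))

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_inputs)
log_lik = gmm.score_samples(np.array([[0.0, 0.0], [8.0, 8.0]]))
print(log_lik)  # the far-away point scores a much lower log-likelihood
```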
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
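The abstract does not spell out the entropy rule, so the sketch below is only one hypothetical instantiation: aggregation weights computed from a softmax over each client's deviation from the global model, damping large parameter deviations. FedEnt's actual entropy-based rule may differ.

```python
import numpy as np

def entropy_adaptive_weights(client_models, global_model, temperature=1.0):
    """Hypothetical entropy-style aggregation weights: a softmax over
    (negative) parameter deviations, so clients that drift far from
    the global model are down-weighted.
    """
    d = np.array([np.linalg.norm(m - global_model) for m in client_models])
    w = np.exp(-d / temperature)
    return w / w.sum()

# Toy usage: the client furthest from the global model gets least weight.
g = np.zeros(2)
print(entropy_adaptive_weights([g + 0.1, g + 1.0, g + 3.0], g))
```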
arXiv Detail & Related papers (2023-03-27T07:57:04Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and comparable communication efficiency.
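A minimal sketch of the skip-and-scatter idea is below; the skip period and the random permutation used for scattering are assumptions for illustration.

```python
import numpy as np

def fedskip_round(client_models, round_idx, skip_period=3, rng=None):
    """One FedSkip-style round (simplified). Every `skip_period`-th
    round, skip averaging and scatter the local models among clients
    (here via a random permutation) so each model next trains on a
    different client's data; otherwise perform plain FedAvg.
    """
    rng = np.random.default_rng() if rng is None else rng
    if round_idx % skip_period == 0:
        order = rng.permutation(len(client_models))
        return [client_models[i] for i in order]   # scatter, no averaging
    avg = np.mean(client_models, axis=0)           # standard FedAvg round
    return [avg.copy() for _ in client_models]

# Toy usage: round 1 is an ordinary averaging round.
models = [np.full(2, float(i)) for i in range(3)]
print(fedskip_round(models, round_idx=1))
```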
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
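The sketch below computes a plaintext class-imbalance measure of the kind Fed-CBS could minimize: the squared distance between the grouped class distribution and the uniform distribution. The actual measure is derived under homomorphic encryption, which is omitted here.

```python
import numpy as np

def grouped_class_imbalance(client_label_counts, selected):
    """Plaintext sketch of a Fed-CBS-style measure: squared distance
    between the class distribution of the data grouped from the
    selected clients and the uniform distribution.
    """
    counts = np.sum([client_label_counts[i] for i in selected], axis=0)
    p = counts / counts.sum()
    return float(np.sum((p - 1.0 / p.size) ** 2))

# Toy usage: a complementary pair of clients yields a balanced group.
counts = [np.array([90, 10]), np.array([10, 90]), np.array([80, 20])]
print(grouped_class_imbalance(counts, [0, 1]))  # 0.0, perfectly balanced
print(grouped_class_imbalance(counts, [0, 2]))  # larger, imbalanced
```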
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates [10.609815608017065]
We extend the standard FedAvg aggregation scheme by introducing aggregation weights to represent the variability of the clients' update times.
Our formalism applies to the general federated setting where clients have heterogeneous datasets and perform at least one step of gradient descent.
We develop in this work FedFix, a novel extension of FedAvg enabling efficient asynchronous federated training while preserving the convergence stability of synchronous aggregation.
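For illustration, the sketch below is one hypothetical fixed-time aggregation with explicit client weights; the data-size and delay-discount weights are assumptions, not FedFix's analyzed weights.

```python
import numpy as np

def fixed_time_aggregate(global_model, updates, data_sizes, delays,
                         eta=1.0, delay_discount=0.9):
    """Aggregate whatever client updates arrived by a fixed wall-clock
    deadline. Hypothetical weights: proportional to data size and
    geometrically discounted by delay (in rounds).
    """
    w = np.array([n * delay_discount ** d
                  for n, d in zip(data_sizes, delays)], dtype=float)
    w /= w.sum()
    mixed = sum(wi * ui for wi, ui in zip(w, updates))
    return (1.0 - eta) * global_model + eta * mixed

# Toy usage: a large but delayed client still outweighs a tiny fresh one.
g = np.zeros(2)
ups = [np.ones(2), 2.0 * np.ones(2)]
print(fixed_time_aggregate(g, ups, data_sizes=[1000, 10], delays=[2, 0]))
```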
arXiv Detail & Related papers (2022-06-21T08:46:05Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)