Enabling Long-Term Cooperation in Cross-Silo Federated Learning: A
Repeated Game Perspective
- URL: http://arxiv.org/abs/2106.11814v1
- Date: Tue, 22 Jun 2021 14:27:30 GMT
- Title: Enabling Long-Term Cooperation in Cross-Silo Federated Learning: A
Repeated Game Perspective
- Authors: Ning Zhang, Qian Ma, Xu Chen
- Abstract summary: Cross-silo federated learning (FL) is a distributed learning approach where clients train a global model cooperatively while keeping their local data private.
We model the long-term selfish participation behaviors of clients as an infinitely repeated game.
We derive a cooperative strategy for clients which minimizes the number of free riders while increasing the amount of local data for model training.
- Score: 16.91343945299973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-silo federated learning (FL) is a distributed learning approach where
clients train a global model cooperatively while keeping their local data
private. Different from cross-device FL, clients in cross-silo FL are usually
organizations or companies which may execute multiple cross-silo FL processes
repeatedly due to their time-varying local data sets, and aim to optimize their
long-term benefits by selfishly choosing their participation levels. While
there has been some work on incentivizing clients to join FL, the analysis of
the long-term selfish participation behaviors of clients in cross-silo FL
remains largely unexplored. In this paper, we analyze the selfish participation
behaviors of heterogeneous clients in cross-silo FL. Specifically, we model the
long-term selfish participation behaviors of clients as an infinitely repeated
game, with the stage game being a selfish participation game in one cross-silo
FL process (SPFL). For the stage game SPFL, we derive the unique Nash
equilibrium (NE), and propose a distributed algorithm for each client to
calculate its equilibrium participation strategy. For the long-term
interactions among clients, we derive a cooperative strategy for clients which
minimizes the number of free riders while increasing the amount of local data
for model training. We show that, enforced by a punishment strategy, such a
cooperative strategy is a subgame perfect Nash equilibrium (SPNE) of the
infinitely repeated game, under which some clients who are free riders at the
NE of the stage game choose to become (partial) contributors. We further
propose an algorithm to calculate the
optimal SPNE which minimizes the number of free riders while maximizing the
amount of local data for model training. Simulation results show that our
proposed cooperative strategy at the optimal SPNE can effectively reduce the
number of free riders and increase the amount of local data for model training.
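To make these two steps concrete, the sketch below illustrates (i) a distributed best-response iteration that converges to the equilibrium of a simple participation stage game and (ii) the standard repeated-game check that a grim-trigger punishment sustains a target cooperative profile for a given discount factor. The logarithmic-benefit utility, the cost parameters, the punishment form, and the example cooperative profile are assumptions chosen here for illustration; they are not the SPFL utility model or the algorithms proposed in the paper.

import numpy as np

# Hypothetical stage game: client i chooses how much local data x_i to contribute;
# its payoff is a shared benefit from the total data minus a private training cost.
# The log benefit, the benefit/cost parameters, and the grim-trigger punishment are
# assumptions made for illustration only.
N = 5
benefit = np.array([4.0, 3.5, 3.0, 2.5, 2.0])   # hypothetical valuations of model accuracy
cost = np.array([0.5, 0.8, 1.0, 1.5, 2.0])      # hypothetical per-unit training costs
x_max = 10.0                                    # cap on a client's contribution

def utility(i, x):
    """Stage payoff of client i: shared benefit of the total data minus its own cost."""
    return benefit[i] * np.log1p(x.sum()) - cost[i] * x[i]

def best_response(i, x):
    """Client i's optimal contribution when the other contributions are fixed."""
    others = x.sum() - x[i]
    # d/dx_i [b_i*ln(1 + others + x_i) - c_i*x_i] = 0  =>  x_i = b_i/c_i - 1 - others
    return float(np.clip(benefit[i] / cost[i] - 1.0 - others, 0.0, x_max))

def stage_nash(max_iter=1000, tol=1e-9):
    """Distributed (sequential) best-response iteration to the stage-game equilibrium."""
    x = np.zeros(N)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(N):
            x[i] = best_response(i, x)
        if np.max(np.abs(x - x_prev)) < tol:
            break
    return x

def sustainable(x_coop, delta):
    """One-shot-deviation check: cooperation enforced by a grim-trigger punishment
    (permanent reversion to the stage NE) is an equilibrium of the repeated game
    iff, for every client, cooperating forever beats deviating once and then being
    punished forever: u_coop/(1-delta) >= u_dev + delta*u_ne/(1-delta)."""
    x_ne = stage_nash()
    for i in range(N):
        x_dev = x_coop.copy()
        x_dev[i] = best_response(i, x_coop)          # most profitable one-shot deviation
        u_coop, u_dev, u_ne = utility(i, x_coop), utility(i, x_dev), utility(i, x_ne)
        if u_coop / (1 - delta) < u_dev + delta * u_ne / (1 - delta):
            return False
    return True

x_ne = stage_nash()
print("stage-NE contributions:", np.round(x_ne, 3))           # only one client contributes
print("free riders at the stage NE:", int(np.sum(x_ne < 1e-6)))
# A target cooperative profile in which the stage-NE free riders also contribute a little.
x_coop = np.array([7.0, 1.0, 0.7, 0.3, 0.2])
for delta in (0.5, 0.99):
    print(f"cooperation sustainable at delta={delta}:", sustainable(x_coop, delta))

Under these toy parameters only one client contributes at the stage Nash equilibrium, while the cooperative profile in which the four stage-game free riders contribute a little becomes sustainable once clients are sufficiently patient (discount factor close to 1), mirroring the qualitative message of the abstract.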
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- How Can Incentives and Cut Layer Selection Influence Data Contribution in Split Federated Learning? [49.16923922018379]
Split Federated Learning (SFL) has emerged as a promising approach by combining the advantages of federated and split learning.
We model the problem using a hierarchical decision-making approach, formulated as a single-leader multi-follower Stackelberg game.
Our findings show that the Stackelberg equilibrium solution maximizes the utility for both the clients and the SFL model owner.
arXiv Detail & Related papers (2024-12-10T06:24:08Z)
- Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves high accuracy, as if all clients were strong, outperforming state-of-the-art width reduction methods.
arXiv Detail & Related papers (2024-06-21T13:19:29Z)
- Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
arXiv Detail & Related papers (2024-05-31T05:21:12Z)
- Optimizing the Collaboration Structure in Cross-Silo Federated Learning [43.388911479025225]
In federated learning (FL), multiple clients collaborate to train machine learning models together.
We propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions.
Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
arXiv Detail & Related papers (2023-06-10T18:59:50Z)
- Free-Rider Games for Federated Learning with Selfish Clients in NextG Wireless Networks [1.1726528038065764]
This paper presents a game-theoretic framework for participation and free-riding in federated learning (FL).
FL is used by clients to support spectrum sensing for NextG communications.
Free-riding behavior may decrease the global accuracy due to a lack of contribution to global model learning.
arXiv Detail & Related papers (2022-12-21T17:10:55Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- Combating Client Dropout in Federated Learning via Friend Model Substitution [8.325089307976654]
Federated learning (FL) is a new distributed machine learning framework known for its benefits on data privacy and communication efficiency.
This paper studies a passive partial client participation scenario that is much less well understood.
We develop a new algorithm FL-FDMS that discovers friends of clients whose data distributions are similar.
Experiments on MNIST and CIFAR-10 confirmed the superior performance of FL-FDMS in handling client dropout in FL.
arXiv Detail & Related papers (2022-05-26T08:34:28Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round (a rough toy sketch of one such round follows this list).
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
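Below is a rough toy sketch of one BLADE-FL-style round, following only the one-sentence protocol description in the BLADE-FL entry above. The flat weight-vector "model", the randomly drawn block winner standing in for the mining/consensus competition, the plain averaging rule, and the lazy-client behavior are all illustrative assumptions, not the paper's design.

import numpy as np

NUM_CLIENTS = 4
DIM = 8                       # size of the toy model (a flat weight vector)
LAZY = {3}                    # hypothetical lazy clients that skip local training

def local_training(client, weights, rng):
    """Stand-in for local SGD: honest clients perturb the weights, lazy ones do not."""
    if client in LAZY:
        return weights.copy()
    return weights + 0.1 * rng.standard_normal(DIM)

def blade_fl_round(models, rng):
    # 1. Each client trains locally and broadcasts its model, so after the exchange
    #    every client holds all NUM_CLIENTS trained models.
    trained = [local_training(i, w, rng) for i, w in enumerate(models)]
    # 2. Clients compete to generate a block from the received models; the winner is
    #    drawn at random here as a stand-in for the mining/consensus competition.
    winner = int(rng.integers(NUM_CLIENTS))
    block = list(trained)     # the winning client's block records the models it received
    # 3. Every client aggregates the models in the generated block (plain averaging)
    #    and starts its next round of local training from the result.
    aggregated = np.mean(block, axis=0)
    return [aggregated.copy() for _ in range(NUM_CLIENTS)], winner

rng = np.random.default_rng(0)
models = [np.zeros(DIM) for _ in range(NUM_CLIENTS)]
for r in range(3):            # a few toy rounds
    models, winner = blade_fl_round(models, rng)
    print(f"round {r}: block generated by client {winner}")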
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.