How Can Incentives and Cut Layer Selection Influence Data Contribution in Split Federated Learning?
- URL: http://arxiv.org/abs/2412.07813v3
- Date: Thu, 23 Jan 2025 09:47:49 GMT
- Title: How Can Incentives and Cut Layer Selection Influence Data Contribution in Split Federated Learning?
- Authors: Joohyung Lee, Jungchan Cho, Wonjun Lee, Mohamed Seif, H. Vincent Poor
- Abstract summary: Split Federated Learning (SFL) has emerged as a promising approach by combining the advantages of federated and split learning.
We model the problem using a hierarchical decision-making approach, formulated as a single-leader multi-follower Stackelberg game.
Our findings show that the Stackelberg equilibrium solution maximizes the utility for both the clients and the SFL model owner.
- Score: 49.16923922018379
- License:
- Abstract: To alleviate the training burden in federated learning while enhancing convergence speed, Split Federated Learning (SFL) has emerged as a promising approach by combining the advantages of federated and split learning. However, recent studies have largely overlooked competitive situations. In this framework, the SFL model owner can choose the cut layer to balance the training load between the server and clients, ensuring the necessary level of privacy for the clients. Additionally, the SFL model owner sets incentives to encourage client participation in the SFL process. The optimization strategies employed by the SFL model owner influence clients' decisions regarding the amount of data they contribute, taking into account the incentives shared among clients and the anticipated energy consumption during SFL. To address this framework, we model the problem using a hierarchical decision-making approach, formulated as a single-leader multi-follower Stackelberg game. We demonstrate the existence and uniqueness of the Nash equilibrium among clients and analyze the Stackelberg equilibrium by examining the leader's game. Furthermore, we discuss privacy concerns related to differential privacy and the criteria for selecting the minimum required cut layer. Our findings show that the Stackelberg equilibrium solution maximizes the utility for both the clients and the SFL model owner.
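The abstract does not spell out the utility functions, so the following is a minimal numerical sketch of how a single-leader multi-follower Stackelberg game of this kind can be set up. The proportional reward-sharing client utility, the logarithmic owner utility, the toy cost values, and the function names (`followers_equilibrium`, `leader_utility`) are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a single-leader multi-follower Stackelberg game for
# data contribution under incentives. Not the paper's model; all utilities and
# constants below are assumptions made for illustration.
#   Client i:  u_i = R * x_i / sum_j x_j - c_i * x_i   (reward share minus energy cost)
#   Owner:     U(R) = a * log(1 + sum_i x_i(R)) - R    (data value minus payout)
import numpy as np

def followers_equilibrium(reward, costs, iters=200):
    """Approximate the clients' Nash equilibrium by iterated best responses."""
    x = np.ones_like(costs)                      # initial data contributions
    for _ in range(iters):
        for i in range(len(costs)):
            others = x.sum() - x[i]              # everyone else's contribution
            # Best response to u_i = R*x_i/(x_i+others) - c_i*x_i:
            # set d u_i / d x_i = R*others/(x_i+others)^2 - c_i = 0
            xi = np.sqrt(reward * others / costs[i]) - others
            x[i] = max(xi, 0.0)                  # contributions cannot be negative
    return x

def leader_utility(reward, costs, a=5.0):
    """Owner's payoff given the contributions induced by a total reward."""
    x = followers_equilibrium(reward, costs)
    return a * np.log1p(x.sum()) - reward

costs = np.array([0.8, 1.0, 1.3])                # toy per-unit energy costs
grid = np.linspace(0.1, 10.0, 200)               # candidate total rewards
best_R = max(grid, key=lambda R: leader_utility(R, costs))
print("leader's reward:", round(best_R, 2))
print("clients' contributions:", followers_equilibrium(best_R, costs).round(3))
```

Under these toy assumptions, clients with lower per-unit energy costs contribute more data at equilibrium, and the owner's grid search trades a larger payout against the value of the extra data it attracts, mirroring the leader-follower structure described in the abstract.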
Related papers
- Towards Seamless Hierarchical Federated Learning under Intermittent Client Participation: A Stagewise Decision-Making Methodology [31.571298699019255]
Hierarchical Federated Learning (HFL) organizes clients into multiple clusters and utilizes edge nodes for intermediate model aggregations.
Current research on HFL mainly focuses on model accuracy, latency, and energy consumption in scenarios with a stable/fixed set of clients.
This study delves into optimizing client selection and client-to-edge associations in HFL under intermittent client participation.
arXiv Detail & Related papers (2025-02-13T13:16:10Z)
- Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning [51.02352381270177]
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy (see the cut-layer sketch after this list).
This article provides a comprehensive overview of the SFL process and thoroughly analyzes energy consumption and privacy.
arXiv Detail & Related papers (2023-11-15T23:23:42Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a set of few shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing their client's local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Context-Aware Online Client Selection for Hierarchical Federated Learning [33.205640790962505]
Federated Learning (FL) has been considered an appealing framework to tackle data privacy issues.
arXiv Detail & Related papers (2021-12-02T01:47:01Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
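Several of the entries above, and the main paper itself, hinge on where the cut layer places the boundary between the client-side and server-side model portions. The snippet below is a small, self-contained sketch of that split on a toy fully connected network; the layer sizes, the multiply-accumulate count used as an energy proxy, and the smashed-data width used as a crude privacy proxy are illustrative assumptions rather than anything taken from the cited papers.

```python
# Illustrative sketch (not from the papers above) of splitting a model at a cut
# layer: the client runs the first layers, sends the "smashed" activations to
# the server, and the server completes the forward pass. Real SFL also sends
# gradients back for the client-side update.
import numpy as np

rng = np.random.default_rng(0)
# A toy fully connected network: 32 -> 64 -> 64 -> 32 -> 10 (hypothetical sizes).
dims = [32, 64, 64, 32, 10]
weights = [rng.standard_normal((dims[i], dims[i + 1])) * 0.1 for i in range(len(dims) - 1)]

def forward(x, layers):
    for W in layers:
        x = np.maximum(x @ W, 0.0)              # linear layer followed by ReLU
    return x

def split_forward(x, cut_layer):
    """Client runs layers [0, cut_layer); the server runs the remaining layers."""
    client_part, server_part = weights[:cut_layer], weights[cut_layer:]
    smashed = forward(x, client_part)           # activations transmitted to the server
    output = forward(smashed, server_part)      # server finishes the forward pass
    client_macs = sum(W.size for W in client_part)   # crude client-energy proxy
    return output, smashed.shape[-1], client_macs

x = rng.standard_normal((1, 32))                # one toy input sample
for cut in range(1, len(weights)):
    _, smashed_dim, macs = split_forward(x, cut)
    print(f"cut layer {cut}: smashed dim = {smashed_dim}, client MACs ~ {macs}")
```

Moving the cut deeper increases the client-side compute proxy while shrinking and abstracting the activations that leave the client, which is the energy-privacy tension the cut-layer analyses above examine.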
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.