Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning
- URL: http://arxiv.org/abs/2311.09441v4
- Date: Fri, 3 May 2024 07:27:18 GMT
- Title: Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning
- Authors: Joohyung Lee, Mohamed Seif, Jungchan Cho, H. Vincent Poor
- Abstract summary: Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy.
This article provides a comprehensive overview of the SFL process and thoroughly analyzes energy consumption and privacy.
- Score: 51.02352381270177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology, leveraging the strengths of both federated and split learning. It offers rapid convergence while addressing privacy concerns, and has therefore received significant attention from both industry and academia. However, since the model is split into client-side and server-side parts at a specific layer, known as the cut layer, the choice of cut layer can substantially affect clients' energy consumption and privacy, as it determines both the training burden and the output of the client-side models. In this article, we provide a comprehensive overview of the SFL process and thoroughly analyze energy consumption and privacy, considering the influence of various system parameters on the cut layer selection strategy. Additionally, we provide an illustrative example of cut layer selection that aims to minimize the risk of the server reconstructing the clients' raw data while keeping energy consumption within the required budget, which involves trade-offs. Finally, we address open challenges in this field; these directions represent promising avenues for future research and development.
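The cut layer selection described in the abstract can be framed as a small constrained search. Below is a minimal, hypothetical sketch; the `client_energy` and `reconstruction_risk` estimators are illustrative placeholders, not the article's energy or privacy models:

```python
# Hypothetical sketch of cut-layer selection: choose the cut layer that
# minimizes the server's reconstruction risk while keeping per-client
# training energy within the budget. The estimator callables are
# illustrative placeholders, not the article's models.

def select_cut_layer(num_layers, energy_budget, client_energy, reconstruction_risk):
    """client_energy(l) -> Joules to train layers 1..l on the client;
    reconstruction_risk(l) -> risk of the server recovering raw data
    from the activations emitted at layer l (lower is better)."""
    feasible = [l for l in range(1, num_layers + 1)
                if client_energy(l) <= energy_budget]
    if not feasible:
        raise ValueError("no cut layer satisfies the energy budget")
    return min(feasible, key=reconstruction_risk)
```

A deeper cut typically raises client-side energy while lowering reconstruction risk, which is exactly the trade-off the constraint resolves.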
Related papers
- A Green Multi-Attribute Client Selection for Over-The-Air Federated Learning: A Grey-Wolf-Optimizer Approach [5.277822313069301]
Over-the-air (OTA) FL was introduced to tackle the challenges of conventional FL by disseminating model updates without direct device-to-device connections or centralized servers.
However, OTA-FL introduced its own limitations: heightened energy consumption and network latency.
We propose a multi-attribute client selection framework employing the grey wolf optimizer (GWO) to strategically control the number of participants in each round.
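A generic grey-wolf-optimizer loop adapted to binary client selection might look as follows; this is a sketch under common GWO conventions, not the paper's exact multi-attribute formulation:

```python
import numpy as np

# Sketch of GWO client selection: each wolf is a score vector over
# clients; thresholding at 0.5 yields a participation mask, and
# `fitness` is any user-supplied multi-attribute cost to minimize.

def gwo_select(fitness, num_clients, wolves=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.random((wolves, num_clients))            # wolf positions in [0, 1]
    for t in range(iters):
        cost = np.array([fitness(x > 0.5) for x in X])
        alpha, beta, delta = X[np.argsort(cost)[:3]] # three best wolves lead
        a = 2.0 * (1 - t / iters)                    # shrinking exploration factor
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(X.shape) - a      # standard GWO coefficients
            C = 2 * rng.random(X.shape)
            X_new += leader - A * np.abs(C * leader - X)
        X = np.clip(X_new / 3.0, 0.0, 1.0)           # average the three pulls
    return min((x > 0.5 for x in X), key=fitness)    # best mask in final pack
```

Here the number of participants per round is controlled implicitly through the fitness function, e.g. by penalizing large selections.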
arXiv Detail & Related papers (2024-09-16T20:03:57Z)
- Exploring Selective Layer Fine-Tuning in Federated Learning [48.470385357429215]
Federated learning (FL) has emerged as a promising paradigm for fine-tuning foundation models using distributed data.
We study selective layer fine-tuning in FL, emphasizing a flexible approach that allows the clients to adjust their selected layers according to their local data and resources.
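In code, selective layer fine-tuning on a client amounts to freezing everything outside the selected subset. A minimal PyTorch-style sketch, assuming a simple "last k layers under the budget" rule as an illustrative stand-in for the paper's data- and resource-aware strategies:

```python
import torch.nn as nn

# Hypothetical client-side routine for selective layer fine-tuning:
# only the selected layers keep requires_grad=True, so local training
# and uploads touch that subset alone.

def select_last_k(layer_names: list, k: int) -> set:
    return set(layer_names[-k:])          # k derived from the client's budget

def freeze_except(model: nn.Module, selected: set) -> None:
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(s) for s in selected)
```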
arXiv Detail & Related papers (2024-08-28T07:48:39Z)
- Federated Short-Term Load Forecasting with Personalization Layers for Heterogeneous Clients [0.7252027234425334]
We propose a personalized FL algorithm (PL-FL) enabling FL to handle personalization layers.
PL-FL is implemented by using the Argonne Privacy-Preserving Federated Learning package.
We test the forecast performance of models trained on the NREL ComStock dataset.
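The core mechanism of personalization layers is that the server averages only the shared parameters. A minimal sketch, assuming a hypothetical "head." name prefix marks the personalization layers (not PL-FL's actual naming):

```python
# Minimal sketch of server-side aggregation with personalization layers:
# parameters whose names start with an illustrative "head." prefix stay
# local to each client; all other parameters are plainly averaged.

PERSONAL_PREFIXES = ("head.",)   # illustrative assumption

def aggregate_shared(client_states):
    """client_states: list of {param_name: tensor} from the clients.
    Returns averaged shared parameters; personalization layers excluded."""
    shared = [k for k in client_states[0] if not k.startswith(PERSONAL_PREFIXES)]
    return {k: sum(s[k] for s in client_states) / len(client_states)
            for k in shared}
```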
arXiv Detail & Related papers (2023-09-22T21:57:52Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous distributions of communication and computational resources.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
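In synchronous FL, the slowest selected client dictates the round time, so overall run-time couples client selection with the number of rounds needed. A toy model of this coupling, with illustrative placeholders rather than TS-FL's actual formulation:

```python
# Toy run-time model for synchronous FL scheduling: a round lasts as
# long as its slowest selected client, and fewer or weaker clients may
# need more rounds to reach the target accuracy.

def round_time(selected, t_compute, t_comm):
    return max(t_compute[i] + t_comm[i] for i in selected)

def total_runtime(selected, rounds_needed, t_compute, t_comm):
    # rounds_needed(selected) -> rounds to reach target accuracy
    return rounds_needed(selected) * round_time(selected, t_compute, t_comm)
```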
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
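One way to take each client's separate contribution into account, in the spirit of VIM, is a server-side top model with one head per client; the dimensions and summing rule below are illustrative assumptions, not the paper's architecture:

```python
import torch.nn as nn

# Illustrative multiple-head VFL top model: the server keeps one linear
# head per client so each client's embedding contributes separably.

class MultiHeadTop(nn.Module):
    def __init__(self, num_clients: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(embed_dim, num_classes) for _ in range(num_clients))

    def forward(self, client_embeddings):     # list of [batch, embed_dim]
        # Joint prediction = sum of per-client head outputs.
        return sum(h(e) for h, e in zip(self.heads, client_embeddings))
```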
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- FedREP: Towards Horizontal Federated Load Forecasting for Retail Energy Providers [1.1254693939127909]
We propose a novel horizontal privacy-preserving federated learning framework for energy load forecasting, namely FedREP.
We consider a federated learning system consisting of a control centre and multiple retailers, enabling multiple REPs to build a common, robust machine learning model without sharing data.
For forecasting, we use a state-of-the-art Long Short-Term Memory (LSTM) neural network due to its ability to learn long-term dependencies in sequences of observations.
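A minimal LSTM forecaster of the kind such a framework would train federatedly; the layer sizes are illustrative assumptions, not FedREP's configuration:

```python
import torch.nn as nn

# Minimal LSTM load forecaster: reads a window of past load values and
# predicts the next `horizon` steps from the final hidden state.

class LoadForecaster(nn.Module):
    def __init__(self, features: int = 1, hidden: int = 64, horizon: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, horizon)

    def forward(self, x):              # x: [batch, window, features]
        h, _ = self.lstm(x)
        return self.out(h[:, -1])      # forecast from the last time step
```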
arXiv Detail & Related papers (2022-03-01T04:16:19Z)
- Context-Aware Online Client Selection for Hierarchical Federated Learning [33.205640790962505]
Federated Learning (FL) has been considered an appealing framework to tackle data privacy issues.
arXiv Detail & Related papers (2021-12-02T01:47:01Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
SFL with client-side model synchronization provides only 1%-2% better accuracy than multi-head split learning (the variant without synchronization) on the MNIST test set.
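The difference between the two schemes reduces to whether the client-side model portions are averaged at the end of a round. A hypothetical sketch over plain per-client state dicts:

```python
# End-of-round step contrasting the two schemes: SFL averages and
# redistributes the client-side models, while multi-head split
# learning simply skips that synchronization step.

def end_of_round(client_models, synchronize):
    if not synchronize:                       # multi-head split learning
        return client_models
    avg = {k: sum(m[k] for m in client_models) / len(client_models)
           for k in client_models[0]}
    return [dict(avg) for _ in client_models] # SFL: broadcast the average
```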
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- A Framework for Energy and Carbon Footprint Analysis of Distributed and Federated Edge Learning [48.63610479916003]
This article breaks down and analyzes the main factors that influence the environmental footprint of distributed learning policies.
It models both vanilla and decentralized FL policies driven by consensus.
Results show that FL allows remarkable end-to-end energy savings (30%-40%) for wireless systems characterized by low bit/Joule efficiency.
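A back-of-the-envelope version of such an energy breakdown; all parameters below are illustrative assumptions, not the article's measured values:

```python
# Toy FL energy model:
# total = rounds x (clients x (compute + uplink + downlink) + server).

def fl_energy(rounds, clients, e_compute, e_up, e_down, e_server):
    per_round = clients * (e_compute + e_up + e_down) + e_server
    return rounds * per_round                       # Joules

print(fl_energy(rounds=100, clients=50,
                e_compute=20.0, e_up=5.0, e_down=2.0,
                e_server=100.0))                    # -> 145000.0 J
```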
arXiv Detail & Related papers (2021-03-18T16:04:42Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
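The round structure described above can be summarized in a few lines; `mine_block`, `aggregate`, and `train_local` are hypothetical placeholders for the paper's block generation, model aggregation, and local training steps:

```python
# Sketch of one BLADE-FL round as summarized above.

def blade_fl_round(clients, mine_block, aggregate, train_local):
    models = [c.model for c in clients]   # every client broadcasts its model
    block = mine_block(models)            # clients compete to generate a block
    for c in clients:
        c.model = aggregate(block)        # aggregate models from the block
        c.model = train_local(c.model)    # then start the next local training
    return clients
```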
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.