Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning
- URL: http://arxiv.org/abs/2602.17614v1
- Date: Thu, 19 Feb 2026 18:40:12 GMT
- Title: Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning
- Authors: Obaidullah Zaland, Sajib Mistry, Monowar Bhuyan
- Abstract summary: Federated learning enables decentralized training of machine learning (ML) models across clients without data centralization. Intermediate representations shared by clients with the server are prone to exposing clients' private data. This work proposes k-anonymous differentially private UFSL to minimize data leakage from the smashed data transferred to the server.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Big data scenarios, where massive, heterogeneous datasets are distributed across clients, demand scalable, privacy-preserving learning methods. Federated learning (FL) enables decentralized training of machine learning (ML) models across clients without data centralization. Decentralized training, however, places a computational burden on client devices. U-shaped federated split learning (UFSL) offloads a fraction of the client computation to the server while keeping both data and labels on the clients' side. However, the intermediate representations (i.e., smashed data) that clients share with the server are prone to exposing clients' private data. To reduce this exposure, this work proposes k-anonymous differentially private UFSL (KD-UFSL), which leverages privacy-enhancing techniques such as microaggregation and differential privacy to minimize data leakage from the smashed data transferred to the server. We first demonstrate that an adversary can recover private client data from intermediate representations via a data-reconstruction attack, and then present KD-UFSL as a privacy-enhancing solution to mitigate this risk. Our experiments on four benchmark datasets indicate that KD-UFSL increases the mean squared error between the actual and reconstructed images by up to 50% in some cases and decreases the structural similarity between them by up to 40%. More importantly, KD-UFSL improves privacy while preserving the utility of the global model, highlighting its suitability for large-scale big data applications where privacy and utility must be balanced.
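The two privacy-enhancing techniques the abstract names, microaggregation and differential privacy, can be sketched on toy smashed data. This is an illustrative sketch only, not the authors' KD-UFSL implementation: the function names, grouping-by-norm heuristic, and Laplace mechanism parameters are assumptions for demonstration.

```python
import numpy as np

def microaggregate(smashed, k=4):
    """k-anonymous microaggregation (illustrative): order activations by
    norm, partition them into groups of at least k, and replace every
    member with its group centroid so no single record stands out."""
    order = np.argsort(np.linalg.norm(smashed, axis=1))
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        # Fold a short tail into the previous group to keep k-anonymity.
        tail = groups.pop()
        groups[-1] = np.concatenate([groups[-1], tail])
    out = np.empty_like(smashed, dtype=float)
    for idx in groups:
        out[idx] = smashed[idx].mean(axis=0)
    return out

def add_dp_noise(smashed, epsilon=1.0, sensitivity=1.0, seed=0):
    """Laplace mechanism: perturb each activation with noise of scale
    sensitivity / epsilon before it leaves the client."""
    rng = np.random.default_rng(seed)
    return smashed + rng.laplace(0.0, sensitivity / epsilon, smashed.shape)

# Client-side pipeline: anonymize first, then privatize, then transmit
# the protected smashed data to the server.
smashed = np.random.default_rng(1).normal(size=(16, 8))  # toy activations
protected = add_dp_noise(microaggregate(smashed, k=4), epsilon=0.5)
```

Applying microaggregation before noise injection means a reconstruction attack on the server side sees only group centroids plus noise, which is the intuition behind the MSE increase and SSIM decrease the paper reports.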
Related papers
- Efficient Federated Learning with Encrypted Data Sharing for Data-Heterogeneous Edge Devices [12.709837838251952]
We propose a new federated learning scheme on edge devices called Federated Learning with Encrypted Data Sharing (FedEDS). FedEDS uses the client model and the model's layer to train the data encryptor and share it with other clients. This approach accelerates the convergence of federated learning training and mitigates the negative impact of data heterogeneity.
arXiv Detail & Related papers (2025-06-25T17:40:54Z) - FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates [58.18162789618869]
Federated Learning (FL) is a widely used framework for training models in a decentralized manner. We propose the FedRand framework, which avoids disclosing the full set of client parameters. We empirically validate that FedRand improves robustness against membership inference attacks (MIAs) compared to relevant baselines.
arXiv Detail & Related papers (2025-03-10T11:55:50Z) - Optimal Strategies for Federated Learning Maintaining Client Privacy [8.518748080337838]
This paper studies the tradeoff between model performance and communication cost in federated learning systems. We show that training for one local epoch per global round gives optimal performance while preserving the same privacy budget.
arXiv Detail & Related papers (2025-01-24T12:34:38Z) - Using Synthetic Data to Mitigate Unfairness and Preserve Privacy in Collaborative Machine Learning [6.516872951510096]
Collaborative machine learning enables multiple clients to train a global model collaboratively. To preserve privacy in such settings, a common technique is to utilize frequent updates and transmissions of model parameters. We propose a two-stage strategy that promotes fair predictions, prevents client-data leakage, and reduces communication costs.
arXiv Detail & Related papers (2024-09-14T21:04:11Z) - Boosting Communication Efficiency of Federated Learning's Secure Aggregation [22.943966056320424]
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server.
FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model.
This poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead.
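The masking idea behind secure aggregation can be illustrated with pairwise additive masks that cancel when the server sums the client updates. This toy sketch is not Google's actual SecAgg protocol (which uses shared secrets, key agreement, and dropout recovery); all names and the use of plain random vectors are simplifying assumptions.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Toy pairwise additive masking (illustrative, not the real SecAgg
    protocol): for each pair (i, j) with i < j, client i adds a shared
    mask that client j subtracts, so every mask cancels in the sum."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)  # secret shared by clients i and j
            masks[i] += m
            masks[j] -= m
    return masks

updates = np.arange(12, dtype=float).reshape(3, 4)  # toy client updates
masked = updates + pairwise_masks(3, 4)
# Each masked update individually looks random to the server, yet the
# aggregate equals the sum of the raw updates because the masks cancel.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
```

The communication overhead that CESA targets comes from establishing these pairwise secrets: naive pairwise masking requires shared state between every pair of clients, which grows quadratically with the number of participants.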
arXiv Detail & Related papers (2024-05-02T10:00:16Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation [25.03733882637947]
We introduce LOKI, an attack that overcomes previous limitations and also breaks the anonymity of aggregation.
With FedAVG and aggregation across 100 clients, prior work can leak less than 1% of images on MNIST, CIFAR-100, and Tiny ImageNet.
Using only a single training round, LOKI is able to leak 76-86% of all data samples.
arXiv Detail & Related papers (2023-03-21T23:29:35Z) - Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.