P3SL: Personalized Privacy-Preserving Split Learning on Heterogeneous Edge Devices
- URL: http://arxiv.org/abs/2507.17228v2
- Date: Tue, 05 Aug 2025 02:08:01 GMT
- Title: P3SL: Personalized Privacy-Preserving Split Learning on Heterogeneous Edge Devices
- Authors: Wei Fan, JinYi Yoon, Xiaochang Li, Huajie Shao, Bo Ji
- Abstract summary: Split Learning (SL) enables resource-constrained edge devices to participate in model training by partitioning a model into client-side and server-side sub-models. SL encounters significant challenges in heterogeneous environments where devices vary in computing resources, communication capabilities, environmental conditions, and privacy requirements. We propose P3SL, a Personalized Privacy-Preserving Split Learning framework designed for heterogeneous, resource-constrained edge device systems.
- Score: 12.821321451464081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Split Learning (SL) is an emerging privacy-preserving machine learning technique that enables resource-constrained edge devices to participate in model training by partitioning a model into client-side and server-side sub-models. While SL reduces computational overhead on edge devices, it encounters significant challenges in heterogeneous environments where devices vary in computing resources, communication capabilities, environmental conditions, and privacy requirements. Although recent studies have explored heterogeneous SL frameworks that optimize split points for devices with varying resource constraints, they often neglect personalized privacy requirements and local model customization under varying environmental conditions. To address these limitations, we propose P3SL, a Personalized Privacy-Preserving Split Learning framework designed for heterogeneous, resource-constrained edge device systems. The key contributions of this work are twofold. First, we design a personalized sequential split learning pipeline that allows each client to achieve customized privacy protection and maintain personalized local models tailored to their computational resources, environmental conditions, and privacy needs. Second, we adopt a bi-level optimization technique that empowers clients to determine their own optimal personalized split points without sharing private sensitive information (i.e., computational resources, environmental conditions, privacy requirements) with the server. This approach balances energy consumption and privacy leakage risks while maintaining high model accuracy. We implement and evaluate P3SL on a testbed consisting of 7 devices including 4 Jetson Nano P3450 devices, 2 Raspberry Pis, and 1 laptop, using diverse model architectures and datasets under varying environmental conditions.
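The split itself is mechanically simple, and a minimal sketch helps fix ideas. The following PyTorch snippet is illustrative only: the layer stack, the `split_point` value, and the single shared optimizer are assumptions, not the authors' implementation. It shows a model cut into a client-side and a server-side sub-model, with only the "smashed" activations and their gradients crossing the boundary:

```python
# Minimal split-learning sketch (illustrative; not the P3SL codebase).
# The model is cut at a per-client split point: shallow layers run on
# the edge device, deep layers on the server. Only the "smashed"
# activations and their gradients cross the network.
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)

split_point = 2                      # hypothetical per-client choice
client_model = layers[:split_point]  # runs on the edge device
server_model = layers[split_point:]  # runs on the server

opt = torch.optim.SGD(
    list(client_model.parameters()) + list(server_model.parameters()), lr=0.01
)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

smashed = client_model(x)            # client-side forward pass
out = server_model(smashed)          # server-side forward pass
loss = nn.functional.cross_entropy(out, y)
opt.zero_grad()
loss.backward()                      # gradients flow back across the cut
opt.step()
```

In P3SL's setting, each client would choose its own `split_point` via the bi-level optimization, trading energy consumption against leakage from the smashed activations; a deeper split generally leaks less but costs the device more compute.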
Related papers
- CoSteer: Collaborative Decoding-Time Personalization via Local Delta Steering [68.91862701376155]
CoSteer is a novel collaborative framework that enables decoding-time personalization through localized delta steering. We formulate token-level optimization as an online learning problem, where local delta vectors dynamically adjust the remote LLM's logits. This approach preserves privacy by transmitting only the final steered tokens rather than raw data or intermediate vectors.
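As a rough sketch of the idea (the shapes, the dummy preference signal, and the update rule are assumptions, not CoSteer's protocol), a local delta vector can be added to server-supplied logits before sampling and updated online:

```python
# Illustrative decoding-time logit steering (not CoSteer's actual code).
# A small local delta vector is added to the remote LLM's logits before
# sampling; only the final steered token would leave the device.
import torch

vocab_size = 50_257                                   # assumed vocabulary size
delta = torch.zeros(vocab_size, requires_grad=True)   # local steering vector
opt = torch.optim.SGD([delta], lr=0.1)

def steer_and_sample(remote_logits: torch.Tensor) -> int:
    """Adjust server logits with the local delta and pick a token."""
    steered = remote_logits + delta
    return int(torch.argmax(steered))

# Online update: nudge delta toward tokens the local personalization
# signal prefers (a dummy target token stands in for that signal here).
remote_logits = torch.randn(vocab_size)
target = torch.tensor([42])                           # hypothetical preferred token
loss = torch.nn.functional.cross_entropy(
    (remote_logits + delta).unsqueeze(0), target
)
opt.zero_grad(); loss.backward(); opt.step()
print(steer_and_sample(remote_logits))
```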
arXiv Detail & Related papers (2025-07-07T08:32:29Z)
- Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
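A toy illustration of the general idea, assuming a plain Gaussian mechanism applied only to protected columns (the paper's simulation-based definition is considerably more general than this):

```python
# Toy feature-level privacy illustration (not the paper's mechanism):
# noise is calibrated only for the protected columns of each record,
# while public features pass through exactly, improving utility.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # assumed dataset: 5 features per record
protected = [1, 3]                 # hypothetical protected-feature indices
sensitivity, epsilon, delta = 1.0, 1.0, 1e-5
# Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP.
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon

X_priv = X.copy()
X_priv[:, protected] += rng.normal(scale=sigma, size=(X.shape[0], len(protected)))
# Downstream training sees exact public features and noised protected ones.
```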
arXiv Detail & Related papers (2025-06-24T17:53:28Z)
- Privacy-preserving Prompt Personalization in Federated Learning for Multimodal Large Language Models [12.406403248205285]
Federated prompt personalization (FPP) is developed to address data heterogeneity and local overfitting. We propose SecFPP, a secure FPP protocol harmonizing personalization and privacy guarantees. We show SecFPP significantly outperforms both non-private and privacy-preserving baselines.
arXiv Detail & Related papers (2025-05-28T15:09:56Z)
- Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning [12.278668095136098]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data. We propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework. Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints.
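A sketch of the mechanism, using a common quantile-tracking heuristic as a stand-in for the paper's multi-objective optimizer (all constants are assumptions):

```python
# Adaptive gradient clipping in DP federated learning (illustrative;
# the paper's multi-objective rule for choosing the clipping norm is
# not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
clip_norm, target_quantile, lr_clip = 1.0, 0.5, 0.2

def clip_and_noise(grad: np.ndarray, clip: float, sigma: float) -> np.ndarray:
    """Clip a client gradient to `clip` and add Gaussian noise."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=grad.shape)

for round_ in range(5):
    grads = [rng.normal(size=10) for _ in range(8)]  # one gradient per client
    noisy = [clip_and_noise(g, clip_norm, sigma=0.8) for g in grads]
    update = np.mean(noisy, axis=0)                  # server-side aggregate
    # Adapt the clipping norm toward a target quantile of client norms,
    # a standard heuristic standing in for the paper's optimizer.
    frac_clipped = np.mean([np.linalg.norm(g) > clip_norm for g in grads])
    clip_norm *= np.exp(-lr_clip * (frac_clipped - target_quantile))
```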
arXiv Detail & Related papers (2025-03-27T04:57:05Z)
- TinyML NLP Scheme for Semantic Wireless Sentiment Classification with Privacy Preservation [49.801175302937246]
This study provides insights into deploying privacy-preserving, energy-efficient NLP models on edge devices. We introduce semantic split learning (SL) as an energy-efficient, privacy-preserving tiny machine learning (TinyML) framework. Our results show that SL significantly reduces computational power and CO2 emissions while enhancing privacy, as evidenced by a fourfold increase in reconstruction error compared to FL and nearly eighteen times that of CL.
arXiv Detail & Related papers (2024-11-09T21:26:59Z)
- Personalized Federated Learning for Cross-view Geo-localization [49.40531019551957]
We propose a methodology combining Federated Learning (FL) with Cross-view Image Geo-localization (CVGL) techniques.
Our method implements a coarse-to-fine approach, where clients share only the coarse feature extractors while keeping fine-grained features specific to local environments.
Results demonstrate that our federated CVGL method achieves performance close to centralized training while maintaining data privacy.
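The sharing pattern can be sketched as federated averaging restricted to the coarse module (module names and shapes are assumptions, not the paper's code):

```python
# Coarse-to-fine sharing sketch (illustrative): clients average only
# the coarse extractor's weights, while each fine-grained head stays
# local to its environment.
import torch
import torch.nn as nn

def make_client():
    return nn.ModuleDict({
        "coarse": nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),
        "fine":   nn.Sequential(nn.Flatten(), nn.Linear(8 * 16 * 16, 32)),
    })

clients = [make_client() for _ in range(3)]

# Federated averaging restricted to the shared "coarse" extractor.
with torch.no_grad():
    avg = {
        k: torch.mean(
            torch.stack([c["coarse"].state_dict()[k] for c in clients]), dim=0
        )
        for k in clients[0]["coarse"].state_dict()
    }
    for c in clients:
        c["coarse"].load_state_dict(avg)  # "fine" modules are never shared
```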
arXiv Detail & Related papers (2024-11-07T13:25:52Z)
- FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity [82.5448598805968]
We present FedP3, an effective and adaptable federated framework representing Federated Personalized and Privacy-friendly network Pruning.
We offer a theoretical interpretation of FedP3 and its locally differentially private variant, DP-FedP3, and theoretically validate their efficiency.
arXiv Detail & Related papers (2024-04-15T14:14:05Z)
- Libertas: Privacy-Preserving Collective Computation for Decentralised Personal Data Stores [18.91869691495181]
We introduce Libertas, a modular architecture for integrating MPC (multi-party computation) with personal data stores (PDS) such as Solid. We further introduce a paradigm shift from an 'omniscient' view to an individual-based, user-centric view of trust and security.
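The MPC building block can be illustrated with plain additive secret sharing (a minimal sketch; Libertas' actual protocol and its Solid integration are far richer):

```python
# Minimal additive secret sharing, the building block behind MPC-style
# collective computation over personal data stores (illustrative only).
import random

P = 2**61 - 1  # a large prime modulus

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Each user shares their private value across parties; parties sum
# their shares locally, and only the aggregate is ever reconstructed.
secrets = [12, 30, 7]
all_shares = [share(s, 3) for s in secrets]
partial = [sum(col) % P for col in zip(*all_shares)]  # per-party sums
print(sum(partial) % P)  # 49: the total, with no individual value revealed
```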
arXiv Detail & Related papers (2023-09-28T12:07:40Z)
- Evaluating Privacy Leakage in Split Learning [8.841387955312669]
On-device machine learning allows us to avoid sharing raw data with a third-party server during inference.
Split Learning (SL) is a promising approach that can overcome these limitations.
In SL, a large machine learning model is divided into two parts, with the larger part residing on the server side and the smaller part executing on-device.
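Leakage in this setting is commonly probed by training a decoder to invert the smashed activations; below is a minimal sketch of that probe (the architectures and attack loop are assumptions, not the paper's exact evaluation):

```python
# Standard leakage probe for split learning (illustrative): an
# attacker-side decoder tries to reconstruct inputs from the smashed
# activations; higher reconstruction error means less leakage.
import torch
import torch.nn as nn

client = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # on-device part
decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1))            # attacker's inverse model
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

x = torch.randn(8, 3, 32, 32)        # stand-in private inputs
with torch.no_grad():
    smashed = client(x)              # what the server actually observes

recon = decoder(smashed)             # attempted reconstruction
loss = nn.functional.mse_loss(recon, x)  # reconstruction error = leakage proxy
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```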
arXiv Detail & Related papers (2023-05-22T13:00:07Z)
- P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups [5.601217969637838]
P4L is a privacy-preserving peer-to-peer learning system that lets users participate in an asynchronous, collaborative learning scheme.
Our design uses strong cryptographic primitives to preserve both the confidentiality and utility of the shared gradients.
arXiv Detail & Related papers (2023-02-26T23:30:18Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
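For intuition, here is a minimal Sinkhorn computation of the entropy-regularized transport cost between two point clouds (illustrative only; the paper's debiased divergence and DP noise are not shown):

```python
# Minimal entropic-OT / Sinkhorn sketch. DP-Sinkhorn builds a DP
# generative objective on top of this kind of cost; that part is
# deliberately omitted here.
import numpy as np

def sinkhorn_cost(x, y, eps=1.0, n_iter=200):
    """Entropy-regularized OT cost between two empirical point clouds."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.full(len(x), 1 / len(x))                     # uniform weights
    b = np.full(len(y), 1 / len(y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                             # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    T = u[:, None] * K * v[None, :]                     # transport plan
    return (T * C).sum()

rng = np.random.default_rng(0)
real = rng.normal(size=(64, 2))
fake = rng.normal(1.0, 1.0, size=(64, 2))
print(sinkhorn_cost(real, fake))  # the distance a generator would minimize
```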
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Unsupervised Model Personalization while Preserving Privacy and Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
arXiv Detail & Related papers (2020-03-30T09:35:12Z)