Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients
- URL: http://arxiv.org/abs/2508.02235v1
- Date: Mon, 04 Aug 2025 09:34:50 GMT
- Title: Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients
- Authors: Sangjun Park, Tony Q. S. Quek, Hyowoon Seo
- Abstract summary: We introduce Pigeon-SL, a novel scheme that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates.
- Score: 53.496957000114875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in split learning (SL) have established it as a promising framework for privacy-preserving, communication-efficient distributed learning at the network edge. However, SL's sequential update process is vulnerable to even a single malicious client, which can significantly degrade model accuracy. To address this, we introduce Pigeon-SL, a novel scheme grounded in the pigeonhole principle that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates. We further enhance training and communication efficiency with Pigeon-SL+, which repeats training on the selected cluster to match the update throughput of standard SL. We validate the robustness and effectiveness of our approach under three representative attack models -- label flipping, activation and gradient manipulation -- demonstrating significant improvements in accuracy and resilience over baseline SL methods in future intelligent wireless networks.
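As a reading aid, here is a minimal, self-contained sketch of the cluster-select step the abstract describes: partition the M clients into N+1 clusters, train each cluster with a toy stand-in for vanilla SL, and let only the lowest-validation-loss cluster advance. The linear regression task, the label-flipping-style corruption, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of one Pigeon-SL global round (illustration only).
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 1                      # M clients, at most N adversarial
d = 10                           # feature dimension
w_true = rng.normal(size=d)

def make_client(malicious: bool):
    X = rng.normal(size=(64, d))
    y = X @ w_true + 0.1 * rng.normal(size=64)
    if malicious:                # label-flipping-style corruption on a regression toy
        y = -y
    return X, y

clients = [make_client(i < N) for i in range(M)]    # first N clients are malicious
X_val = rng.normal(size=(128, d))
y_val = X_val @ w_true                              # shared validation set at the access point

def sequential_sl_round(w, cluster, lr=0.05, steps=20):
    """Toy stand-in for vanilla SL: clients in a cluster update the model one after another."""
    w = w.copy()
    for idx in cluster:
        X, y = clients[idx]
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
    return w

def pigeon_sl_round(w_global):
    # Partition the M clients into N+1 clusters: by the pigeonhole principle,
    # at least one cluster contains no adversary.
    perm = rng.permutation(M)
    clusters = np.array_split(perm, N + 1)
    candidates = [sequential_sl_round(w_global, c) for c in clusters]
    val_losses = [np.mean((X_val @ w - y_val) ** 2) for w in candidates]
    best = int(np.argmin(val_losses))        # only the lowest-loss cluster advances
    return candidates[best], val_losses

w = np.zeros(d)
for r in range(5):
    w, losses = pigeon_sl_round(w)
    print(f"round {r}: per-cluster val losses = {np.round(losses, 3)}")
```

Because at most N clients are adversarial, at least one of the N+1 clusters is entirely honest, so the selected model is driven by clean updates and the malicious cluster's output is discarded each round.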
Related papers
- SMTFL: Secure Model Training to Untrusted Participants in Federated Learning [8.225656436115509]
Federated learning is an essential distributed model training technique. Gradient inversion attacks and poisoning attacks pose significant risks to the privacy of training data and to model correctness. We propose a novel approach called SMTFL to achieve secure model training in federated learning without relying on trusted participants.
arXiv Detail & Related papers (2025-02-04T06:12:43Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
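The SecAgg double-masking technique mentioned above can be pictured with a toy example: each pair of clients shares a mask that cancels when the server sums the masked updates, and each client additionally adds a self-mask that the server removes at the end. The sketch below assumes no dropouts, no key agreement, and no secret sharing, with seeded NumPy generators standing in for PRGs; it illustrates the classic SecAgg idea, not ACCESS-FL's protocol.

```python
# Toy illustration of SecAgg-style double masking (simplified assumptions).
import numpy as np

n_clients, dim = 4, 8
rng = np.random.default_rng(1)
updates = [rng.normal(size=dim) for _ in range(n_clients)]

def prg(seed, size=dim):
    # Seeded generator standing in for a cryptographic PRG.
    return np.random.default_rng(seed).normal(size=size)

pair_seed = {(i, j): 1000 + 10 * i + j
             for i in range(n_clients) for j in range(i + 1, n_clients)}
self_seed = [2000 + i for i in range(n_clients)]

def mask(i):
    m = prg(self_seed[i])                                   # self-mask b_i
    for j in range(n_clients):
        if i == j:
            continue
        s = pair_seed[(min(i, j), max(i, j))]
        m += prg(s) if i < j else -prg(s)                   # pairwise masks cancel in the sum
    return updates[i] + m

masked_sum = sum(mask(i) for i in range(n_clients))
# With every client surviving, only the self-masks remain to be removed.
aggregate = masked_sum - sum(prg(s) for s in self_seed)
print(np.allclose(aggregate, sum(updates)))                 # True
```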
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning [2.4201849657206496]
Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
With this feature, federated learning is considered a secure solution for data privacy issues.
We propose a blockchain-based optimized client selection and privacy-preserved framework.
arXiv Detail & Related papers (2023-07-25T01:35:51Z)
- FedPNN: One-shot Federated Classification via Evolving Clustering Method and Probabilistic Neural Network hybrid [4.241208172557663]
We propose a two-stage federated learning approach toward the objective of privacy protection.
In the first stage, a synthetic dataset is generated by employing two different distributions as noise.
In the second stage, the Federated Probabilistic Neural Network (FedPNN) is developed and employed for building a globally shared classification model.
arXiv Detail & Related papers (2023-04-09T03:23:37Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence over state-of-the-art methods.
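As a rough illustration of confidence-based pseudo-labeling refined with class prototypes, the general idea behind ProtoCon, the sketch below blends classifier probabilities with prototype-similarity scores and keeps only confident pseudo-labels. The single hard-assignment clustering pass, the 0.5 blending weight, and the 0.6 threshold are illustrative assumptions, not the paper's method.

```python
# Rough sketch of prototype-refined, confidence-based pseudo-labeling.
import numpy as np

rng = np.random.default_rng(2)
n_unlabeled, n_classes, emb_dim = 100, 5, 16
embeddings = rng.normal(size=(n_unlabeled, emb_dim))
classifier_logits = rng.normal(size=(n_unlabeled, n_classes))

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

probs = softmax(classifier_logits)

# One hard-assignment clustering pass to build class prototypes.
hard = probs.argmax(axis=1)
prototypes = np.stack([embeddings[hard == c].mean(axis=0) if np.any(hard == c)
                       else np.zeros(emb_dim) for c in range(n_classes)])

# Refine: blend classifier probabilities with prototype-similarity probabilities.
sims = embeddings @ prototypes.T
refined = 0.5 * probs + 0.5 * softmax(sims)

confidence = refined.max(axis=1)
mask = confidence > 0.6                 # only confident samples receive pseudo-labels
pseudo_labels = refined.argmax(axis=1)
print(f"{mask.sum()} / {n_unlabeled} samples pass the confidence threshold")
```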
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
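The claim that a simple linear model can capture client-specific properties from aggregated updates can be pictured with a toy simulation: synthetic aggregates are shifted slightly whenever the target client's data has the property, and a least-squares linear probe is fit to recover it. The data generator and probe below are illustrative assumptions, not the paper's attack pipeline.

```python
# Toy simulation: a linear probe infers a property from aggregated updates.
import numpy as np

rng = np.random.default_rng(3)
rounds, n_clients, dim = 200, 10, 32

def aggregated_update(target_has_property: bool):
    # Benign contributions look like noise; the target's property shifts the aggregate slightly.
    upd = rng.normal(size=(n_clients, dim)).mean(axis=0)
    if target_has_property:
        upd += 0.3 * np.ones(dim) / np.sqrt(dim)
    return upd

labels = rng.integers(0, 2, size=rounds)
X = np.stack([aggregated_update(bool(y)) for y in labels])

# Least-squares linear probe predicting the property label from the aggregate.
A = np.c_[X, np.ones(rounds)]
w, *_ = np.linalg.lstsq(A, labels * 2.0 - 1.0, rcond=None)
pred = (A @ w > 0).astype(int)
print("training accuracy of the linear probe:", (pred == labels).mean())
```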
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Split Learning without Local Weight Sharing to Enhance Client-side Data Privacy [11.092451849022268]
Split learning (SL) aims to protect user data privacy by distributing deep models between client-server and keeping private data locally.
This paper first reveals, through model inversion attacks, that local weight sharing among clients in SL exacerbates data privacy leakage.
We propose and analyze privacy-enhanced SL (P-SL), i.e., SL without local weight sharing, to reduce this data privacy leakage.
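A minimal toy of the P-SL-style arrangement: each client keeps its own client-side weights (nothing is passed between clients), while only the server-side half of the split model is shared. The linear "halves", cut-layer width, and training loop below are illustrative assumptions, not the paper's implementation.

```python
# Toy split learning without local weight sharing: per-client client-side weights,
# a single shared server-side part.
import numpy as np

rng = np.random.default_rng(4)
n_clients, d_in, d_cut, steps, lr = 3, 8, 4, 50, 0.05
data = [(rng.normal(size=(32, d_in)), rng.normal(size=32)) for _ in range(n_clients)]

W_server = rng.normal(size=d_cut) * 0.1               # shared server-side half
W_clients = [rng.normal(size=(d_in, d_cut)) * 0.1     # per-client client-side half
             for _ in range(n_clients)]

for _ in range(steps):
    for i, (X, y) in enumerate(data):
        Wc = W_clients[i]                              # no weight sharing across clients
        h = X @ Wc                                     # "smashed data" sent at the cut layer
        pred = h @ W_server
        err = pred - y
        grad_server = h.T @ err / len(y)
        grad_h = np.outer(err, W_server) / len(y)      # gradient returned to the client
        grad_client = X.T @ grad_h
        W_server -= lr * grad_server
        W_clients[i] = Wc - lr * grad_client
```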
arXiv Detail & Related papers (2022-12-01T03:35:14Z)
- MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
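A hedged sketch of the generic building block, clustering client updates by pairwise cosine similarity so that benign and malicious updates land in separate groups. MUDGUARD's actual design additionally tolerates malicious majorities and adds privacy protection via secure computation, which this toy omits; the greedy single-link grouping below stands in for a proper clustering algorithm such as DBSCAN.

```python
# Toy cosine-similarity clustering of client updates.
import numpy as np

rng = np.random.default_rng(5)
dim, n_benign, n_malicious = 16, 4, 6
benign_dir = rng.normal(size=dim)
updates = np.vstack([
    benign_dir + 0.1 * rng.normal(size=(n_benign, dim)),            # benign: similar directions
    -3.0 * benign_dir + 0.1 * rng.normal(size=(n_malicious, dim)),  # attackers: a different direction
])

norm = updates / np.linalg.norm(updates, axis=1, keepdims=True)
cos = norm @ norm.T

# Greedy single-link grouping on cosine similarity.
threshold, groups = 0.8, []
unassigned = set(range(len(updates)))
while unassigned:
    seed = unassigned.pop()
    group, frontier = {seed}, {seed}
    while frontier:
        i = frontier.pop()
        near = {j for j in unassigned if cos[i, j] > threshold}
        unassigned -= near
        group |= near
        frontier |= near
    groups.append(sorted(group))

print("clusters of client indices:", groups)   # benign and malicious end up in separate groups
```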
arXiv Detail & Related papers (2022-08-22T09:17:58Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
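The summary above does not spell out FedReg's mechanism, so the sketch below illustrates the general idea of regularized local training with a deliberately swapped-in, simpler technique: a FedProx-style proximal term that penalizes drift of the local weights away from the global model. It is not FedReg itself.

```python
# Local training with a FedProx-style proximal term (stand-in illustration, not FedReg).
import numpy as np

rng = np.random.default_rng(6)
dim, lr, mu, steps = 8, 0.1, 0.5, 100
w_global = rng.normal(size=dim)
X = rng.normal(size=(64, dim))
y = X @ rng.normal(size=dim)                     # this client's local task

w = w_global.copy()
for _ in range(steps):
    grad_task = 2 * X.T @ (X @ w - y) / len(y)
    grad_prox = 2 * mu * (w - w_global)          # discourages forgetting the global model
    w -= lr * (grad_task + grad_prox)

print("distance to global model:", np.linalg.norm(w - w_global))
```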
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
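To make the one-vs-all idea concrete, the sketch below scores each sample with the OVA head of its predicted class and flags it as an outlier when that head's inlier probability is low. The random logits and the 0.5 threshold are placeholders; OpenMatch's actual training losses and consistency regularization are not reproduced here.

```python
# Toy outlier flagging with one-vs-all (OVA) inlier heads.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_classes = 6, 4
closed_set_logits = rng.normal(size=(n_samples, n_classes))      # standard classifier head
ova_inlier_logits = rng.normal(size=(n_samples, n_classes))      # one OVA head per class

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

pred = closed_set_logits.argmax(axis=1)
# Ask the OVA head of each sample's predicted class: "is this an inlier?"
inlier_prob = sigmoid(ova_inlier_logits[np.arange(n_samples), pred])
is_outlier = inlier_prob < 0.5

for i in range(n_samples):
    tag = "outlier" if is_outlier[i] else f"class {pred[i]}"
    print(f"sample {i}: inlier prob {inlier_prob[i]:.2f} -> {tag}")
```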
This list is automatically generated from the titles and abstracts of the papers on this site.