CLIP: Client-Side Invariant Pruning for Mitigating Stragglers in Secure Federated Learning
- URL: http://arxiv.org/abs/2510.16694v1
- Date: Sun, 19 Oct 2025 03:29:37 GMT
- Title: CLIP: Client-Side Invariant Pruning for Mitigating Stragglers in Secure Federated Learning
- Authors: Anthony DiMaggio, Raghav Sharma, Gururaj Saileshwar
- Abstract summary: This paper introduces the first straggler mitigation technique for secure aggregation with deep neural networks. We propose CLIP, a client-side invariant neuron pruning technique coupled with network-aware pruning that addresses compute and network bottlenecks due to stragglers during training with minimal accuracy loss.
- Score: 2.7018389296091505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Secure federated learning (FL) preserves data privacy during distributed model training. However, deploying such frameworks across heterogeneous devices results in performance bottlenecks, due to straggler clients with limited computational or network capabilities, slowing training for all participating clients. This paper introduces the first straggler mitigation technique for secure aggregation with deep neural networks. We propose CLIP, a client-side invariant neuron pruning technique coupled with network-aware pruning that addresses compute and network bottlenecks due to stragglers during training with minimal accuracy loss. Our technique accelerates secure FL training by 13% to 34% across multiple datasets (CIFAR10, Shakespeare, FEMNIST), with an accuracy impact ranging from a 1.3% improvement to a 2.6% reduction.
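The abstract does not spell out how CLIP decides that a neuron is invariant, so the following is only one plausible reading, not the paper's algorithm: a neuron whose activations barely vary across a client's local data contributes little and is a cheap candidate to prune. A minimal NumPy sketch under that assumption (all names, shapes, and the keep ratio are illustrative):

```python
import numpy as np

def invariant_neuron_mask(activations: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Flag neurons whose activations vary least across local samples.

    activations: (num_samples, num_neurons) array collected on the
    client's local data; returns a boolean keep-mask."""
    variance = activations.var(axis=0)             # per-neuron variance
    k = max(1, int(keep_ratio * variance.size))    # number of neurons to keep
    keep = np.zeros(variance.size, dtype=bool)
    keep[np.argsort(variance)[-k:]] = True         # keep the most variant ones
    return keep

def prune_layer(weights: np.ndarray, bias: np.ndarray, keep: np.ndarray):
    """Drop the output rows of a dense layer flagged as invariant."""
    return weights[keep], bias[keep]

# Toy usage: a straggler keeps 60% of a 128-neuron layer.
acts = np.random.randn(256, 128)                   # toy local activations
W, b = np.random.randn(128, 64), np.random.randn(128)
keep = invariant_neuron_mask(acts, keep_ratio=0.6)
W_p, b_p = prune_layer(W, b, keep)
print(W_p.shape, b_p.shape)                        # (76, 64) (76,)
```

Coupling this with the abstract's network-aware pruning would presumably mean choosing keep_ratio per client from its measured bandwidth or compute budget, so stragglers prune more aggressively and both their local compute and their masked-update upload shrink.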
Related papers
- Split Learning-Enabled Framework for Secure and Light-weight Internet of Medical Things Systems [19.2207350991622]
We propose a split learning (SL) based framework for IoT malware detection through image-based classification. By dividing the neural network training between the clients and an edge server, the framework reduces the computational burden on resource-constrained clients (a toy client/server split is sketched after this list). Experimental evaluations show that the proposed framework outperforms popular FL methods in accuracy (+6.35%), F1-score (+5.03%), convergence speed (+14.96%), and resource consumption (33.83% less).
arXiv Detail & Related papers (2025-11-01T00:40:10Z)
- ZORRO: Zero-Knowledge Robustness and Privacy for Split Learning (Full Version) [58.595691399741646]
Split Learning (SL) is a distributed learning approach that enables resource-constrained clients to collaboratively train deep neural networks (DNNs). This setup enables SL to leverage server capacities without sharing data, making it highly effective in resource-constrained environments dealing with sensitive data. We present ZORRO, a private, verifiable, and robust SL defense scheme.
arXiv Detail & Related papers (2025-09-11T18:44:09Z)
- Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients [53.496957000114875]
We introduce Pigeon-SL, a novel scheme that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates (this selection loop is sketched after this list).
arXiv Detail & Related papers (2025-08-04T09:34:50Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique (a toy demonstration of the pairwise mask cancellation appears after this list).
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Fluent: Round-efficient Secure Aggregation for Private Federated Learning [23.899922716694427]
Federated learning (FL) facilitates collaborative training of machine learning models among a large number of clients.
FL remains susceptible to vulnerabilities such as privacy inference and inversion attacks.
This work introduces Fluent, a round and communication-efficient secure aggregation scheme for private FL.
arXiv Detail & Related papers (2024-03-10T09:11:57Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed that recasts training-latency minimization as a graph edge-selection problem (a pairing heuristic in this spirit is sketched after this list).
Simulation results show the proposed method can significantly improve FL training speed while maintaining high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels [141.29156234353133]
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to optimization challenges presented by nonconvexity.
We propose a Train-Convexify neural network (TCT) procedure to sidestep this issue.
arXiv Detail & Related papers (2022-07-13T16:58:22Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network (a generic dynamic sparse masking sketch appears after this list).
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
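For the split learning IoMT entry above, the mechanism is cutting the network at an intermediate layer: the client runs the front layers on its private data, ships only the cut-layer activations to the edge server, and the server completes the forward and backward pass. A toy PyTorch sketch of one such training step (the cut point and layer sizes are invented):

```python
import torch
import torch.nn as nn

# The client holds the early layers; the edge server holds the rest.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)

x = torch.randn(32, 1, 28, 28)             # a local batch; never leaves client
y = torch.randint(0, 10, (32,))

smashed = client_net(x)                    # client-side forward
sent = smashed.detach().requires_grad_()   # simulates the network hop
loss = nn.functional.cross_entropy(server_net(sent), y)

opt_s.zero_grad()
loss.backward()                            # server backward; grads at the cut
opt_s.step()
opt_c.zero_grad()
smashed.backward(sent.grad)                # cut-layer grads shipped to client
opt_c.step()
```

The detach/requires_grad_ pair stands in for the network boundary: only activations travel up and only their gradients travel back, never the raw inputs.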
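The Pigeon-SL entry describes its round structure concretely enough to sketch: with at most N adversaries, splitting M clients into N+1 clusters guarantees by the pigeonhole principle that one cluster is all-honest, and the lowest shared-validation loss picks the winner. Here train_cluster and val_loss are hypothetical stand-ins for per-cluster SL training and evaluation:

```python
import random

def pigeon_sl_round(clients, n_adversarial, train_cluster, val_loss):
    """One Pigeon-SL global round as described in the summary above."""
    clients = list(clients)
    random.shuffle(clients)
    k = n_adversarial + 1                    # N+1 clusters: one must be honest
    clusters = [clients[i::k] for i in range(k)]
    models = [train_cluster(c) for c in clusters]   # vanilla SL per cluster
    losses = [val_loss(m) for m in models]          # shared validation set
    best = min(range(k), key=losses.__getitem__)
    return models[best]                             # only the best advances

# Toy usage with stand-ins: "training" returns the cluster itself, and the
# "loss" penalizes any cluster containing an adversary.
bad = {3, 7}
winner = pigeon_sl_round(
    range(12), n_adversarial=2,
    train_cluster=lambda c: c,
    val_loss=lambda c: 1.0 if set(c) & bad else 0.0,
)
print(winner)  # a cluster disjoint from the adversaries
```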
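The ACCESS-FL entry references Google's SecAgg double masking: each client blinds its update with pairwise masks, agreed with every other client, that cancel in the server's sum, plus a self-mask removed at the end. The toy below shows only the arithmetic of that cancellation; real SecAgg derives masks from key agreement and secret-shares the seeds to tolerate dropouts, all of which is elided here:

```python
import random

MOD = 2**32

def masked_update(uid, update, all_ids, pair_seed, self_mask):
    """Blind one client's update: each pairwise mask is added by the
    lower-id client and subtracted by the higher-id one, so the masks
    cancel when the server sums all blinded updates."""
    masked = [(v + self_mask) % MOD for v in update]
    for peer in all_ids:
        if peer == uid:
            continue
        rng = random.Random(pair_seed[frozenset((uid, peer))])
        for i in range(len(masked)):
            m = rng.randrange(MOD)
            masked[i] = (masked[i] + m if uid < peer else masked[i] - m) % MOD
    return masked

# Three clients with toy 2-dimensional updates.
updates = {0: [5, 7], 1: [1, 2], 2: [4, 4]}
pair_seed = {frozenset(p): random.randrange(MOD) for p in [(0, 1), (0, 2), (1, 2)]}
self_masks = {u: random.randrange(MOD) for u in updates}

blinded = [masked_update(u, upd, list(updates), pair_seed, self_masks[u])
           for u, upd in updates.items()]
totals = [sum(col) % MOD for col in zip(*blinded)]
# The server removes the (in real SecAgg, secret-shared) self-masks.
true_sum = [(t - sum(self_masks.values())) % MOD for t in totals]
print(true_sum)  # [10, 13] -- the pairwise masks cancelled out
```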
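The pairing-and-split-learning entry frames latency minimization as greedy edge selection over client pairs. Without the paper's exact latency model, the sketch below uses a simpler heuristic in the same spirit, matching the slowest unmatched client with the fastest so that no pair is dominated by a straggler (speeds are invented):

```python
def pair_fast_with_slow(speeds):
    """Match the slowest unmatched client with the fastest, so a strong
    device can host the heavier split of each straggler's model."""
    order = sorted(speeds, key=speeds.get)       # slowest ... fastest
    return [(order[i], order[-1 - i]) for i in range(len(order) // 2)]

speeds = {"c1": 1.0, "c2": 4.0, "c3": 1.2, "c4": 3.5}  # relative compute
print(pair_fast_with_slow(speeds))               # [('c1', 'c2'), ('c3', 'c4')]
```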
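For FedDST, the entry says sparse sub-networks are extracted from the full network and trained. A common way to realize that, sketched below, is magnitude-based masking with periodic prune-and-regrow in the style of dynamic sparse training; this is a generic sketch, not FedDST's exact schedule, and all thresholds are illustrative:

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Extract a sub-network: keep the largest-magnitude (1 - sparsity)
    fraction of weights, zeroing out the rest during training."""
    k = max(1, int((1.0 - sparsity) * weights.size))
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.abs(weights) >= threshold

def prune_and_regrow(weights, mask, grads, adjust_frac=0.1):
    """Periodic mask update: drop the weakest surviving weights and regrow
    the same number of dead ones where gradient magnitude is largest."""
    n = max(1, int(adjust_frac * mask.sum()))
    alive_mag = np.where(mask, np.abs(weights), np.inf)
    drop = np.argsort(alive_mag, axis=None)[:n]          # weakest kept weights
    dead_grad = np.where(mask, -np.inf, np.abs(grads))
    grow = np.argsort(dead_grad, axis=None)[-n:]         # strongest dead ones
    new_mask = mask.copy()
    new_mask.flat[drop] = False
    new_mask.flat[grow] = True
    return new_mask

W = np.random.randn(64, 64)
mask = magnitude_mask(W, sparsity=0.8)    # train only ~20% of the weights
G = np.random.randn(64, 64)               # stand-in local gradients
mask = prune_and_regrow(W, mask, G)
print(mask.mean())                        # density stays ~0.2
```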