Emerging Paradigms for Securing Federated Learning Systems
- URL: http://arxiv.org/abs/2509.21147v1
- Date: Thu, 25 Sep 2025 13:34:44 GMT
- Title: Emerging Paradigms for Securing Federated Learning Systems
- Authors: Amr Akmal Abouelmagd, Amr Hilal
- Abstract summary: Methods such as MPC, Homomorphic Encryption (HE), and Differential Privacy (DP) often incur high computational costs and suffer from limited scalability. This survey examines emerging approaches that hold promise for enhancing both privacy and efficiency in Federated Learning. For each paradigm, we assess its relevance to the FL pipeline, outlining its strengths, limitations, and practical considerations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) facilitates collaborative model training while keeping raw data decentralized, making it a conduit for leveraging the power of IoT devices while maintaining privacy of the locally collected data. However, existing privacy-preserving techniques present notable hurdles. Methods such as Multi-Party Computation (MPC), Homomorphic Encryption (HE), and Differential Privacy (DP) often incur high computational costs and suffer from limited scalability. This survey examines emerging approaches that hold promise for enhancing both privacy and efficiency in FL, including Trusted Execution Environments (TEEs), Physical Unclonable Functions (PUFs), Quantum Computing (QC), Chaos-Based Encryption (CBE), Neuromorphic Computing (NC), and Swarm Intelligence (SI). For each paradigm, we assess its relevance to the FL pipeline, outlining its strengths, limitations, and practical considerations. We conclude by highlighting open challenges and prospective research avenues, offering a detailed roadmap for advancing secure and scalable FL systems.
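The DP approach named in the abstract is typically applied in FL by clipping each client's model update to a fixed L2 norm and adding calibrated Gaussian noise before aggregation; the noise is what costs accuracy. A minimal sketch of one such aggregation round (parameter names and values are illustrative, not from the survey):

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_mult=0.8, rng=None):
    """One DP-style FedAvg round: clip each client update to an L2 bound,
    average, then add Gaussian noise scaled to the clipping bound
    (standard Gaussian-mechanism calibration)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink, never grow
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    # Per-coordinate noise stddev: sensitivity of the mean is clip_norm / n.
    sigma = noise_mult * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([-1.0, 1.0])]
noisy_avg = dp_fedavg_round(updates)
```

Raising `noise_mult` strengthens the privacy guarantee but pushes the released average further from the true one, which is the accuracy/privacy trade-off the survey contrasts with cheaper emerging paradigms.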
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z) - Subgraph Federated Learning via Spectral Methods [52.40322201034717]
FedLap is a novel framework that captures inter-node dependencies while ensuring privacy and scalability. We provide a formal analysis demonstrating that FedLap preserves privacy.
arXiv Detail & Related papers (2025-10-29T16:22:32Z) - Closer to Reality: Practical Semi-Supervised Federated Learning for Foundation Model Adaptation [56.36237936346563]
Foundation models (FMs) exhibit remarkable generalization but require adaptation to downstream tasks. Due to data privacy regulations, cloud-based FMs cannot directly access private edge data. We introduce Practical Semi-Supervised Federated Learning (PSSFL), where edge devices hold only unlabeled, low-resolution data. Our work paves the way for scalable and privacy-preserving FM adaptation in federated scenarios.
arXiv Detail & Related papers (2025-08-22T17:47:02Z) - A Privacy-Centric Approach: Scalable and Secure Federated Learning Enabled by Hybrid Homomorphic Encryption [2.611778281107039]
Federated Learning (FL) enables collaborative model training without sharing raw data, making it a promising approach for privacy-sensitive domains. Despite its potential, FL faces significant challenges, particularly in terms of communication overhead and data privacy. In this work, we explore how Hybrid Homomorphic Encryption (HHE), a cryptographic protocol that combines symmetric encryption with HE, can be effectively integrated with FL to address both communication and privacy challenges.
arXiv Detail & Related papers (2025-07-20T07:46:53Z) - FuSeFL: Fully Secure and Scalable Cross-Silo Federated Learning [0.696125353550498]
Federated Learning (FL) enables collaborative model training without centralizing client data, making it attractive for privacy-sensitive domains. We present FuSeFL, a fully secure and scalable FL scheme designed for cross-silo settings.
arXiv Detail & Related papers (2025-07-18T00:50:44Z) - Secure Distributed Learning for CAVs: Defending Against Gradient Leakage with Leveled Homomorphic Encryption [0.0]
Homomorphic Encryption (HE) offers a promising alternative to Differential Privacy (DP) and Secure Multi-Party Computation (SMPC). We evaluate various HE schemes to identify the most suitable for Federated Learning (FL) in resource-constrained environments. We develop a full HE-based FL pipeline that effectively mitigates Deep Leakage from Gradients (DLG) attacks while preserving model accuracy.
arXiv Detail & Related papers (2025-06-09T16:12:18Z) - Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security [17.45950557331482]
Federated learning (FL) enables collaborative model training while preserving user data privacy by keeping data local. Despite these advantages, FL remains vulnerable to privacy attacks on user updates and model parameters during training and deployment. We introduce Beskar, a novel framework that provides post-quantum secure aggregation.
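Secure aggregation, the primitive Beskar hardens, can be illustrated with a toy pairwise-masking scheme (a simplification in the spirit of Bonawitz-style protocols, not Beskar's actual construction): each client pair agrees on a shared random mask that one adds and the other subtracts, so the server sees only masked updates, yet the masks cancel in the sum.

```python
import numpy as np

def mask_updates(updates, rng=None):
    """Toy secure aggregation via pairwise additive masks.
    For each client pair (i, j) with i < j, draw a shared random mask;
    client i adds it, client j subtracts it. Each masked update looks
    random on its own, but all masks cancel in the server-side sum."""
    rng = rng or np.random.default_rng(42)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[0].shape)  # shared pairwise mask
            masked[i] += m
            masked[j] -= m
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
masked = mask_updates(updates)
server_sum = np.sum(masked, axis=0)  # masks cancel: equals the plain sum
```

Real protocols add key agreement for the shared masks and dropout recovery; post-quantum variants such as Beskar's replace the classical key-exchange assumptions.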
arXiv Detail & Related papers (2025-05-09T03:20:48Z) - TinyML NLP Scheme for Semantic Wireless Sentiment Classification with Privacy Preservation [49.801175302937246]
This study provides insights into deploying privacy-preserving, energy-efficient NLP models on edge devices. We introduce semantic split learning (SL) as an energy-efficient, privacy-preserving tiny machine learning (TinyML) framework. Our results show that SL significantly reduces computational power and CO2 emissions while enhancing privacy, as evidenced by a fourfold increase in reconstruction error compared to FL and nearly eighteen times that of CL.
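Split learning, as used above, partitions a network at a cut layer: the edge device computes only the early layers and transmits intermediate activations, so raw inputs never leave the device. A minimal sketch with made-up layer sizes (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(7)

# Client side: only the layers up to the cut run on the edge device.
W1 = rng.normal(size=(8, 16))           # edge-side weights
def client_forward(x):
    return np.maximum(0.0, W1 @ x)      # ReLU activations cross the network

# Server side: the remaining layers run in the cloud.
W2 = rng.normal(size=(2, 8))            # server-side weights
def server_forward(h):
    return W2 @ h                       # e.g. two sentiment logits

x = rng.normal(size=16)                 # raw input stays on the device
h = client_forward(x)                   # only this is transmitted
logits = server_forward(h)
```

The privacy claim hinges on how hard it is to reconstruct `x` from `h`; the fourfold reconstruction-error increase reported above quantifies exactly that inversion difficulty.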
arXiv Detail & Related papers (2024-11-09T21:26:59Z) - Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z) - Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large language models (LLMs) have driven profound transformations in wireless networks. Within wireless environments, the training of LLMs faces significant challenges related to security and privacy. This paper presents a systematic analysis of the training stages of LLMs in wireless networks, including pre-training, instruction tuning, and alignment tuning.
arXiv Detail & Related papers (2024-04-20T02:30:21Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems. We also present its unique properties and theoretical analyses.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)