Towards Resilient Federated Learning in CyberEdge Networks: Recent Advances and Future Trends
- URL: http://arxiv.org/abs/2504.01240v1
- Date: Tue, 01 Apr 2025 23:06:45 GMT
- Title: Towards Resilient Federated Learning in CyberEdge Networks: Recent Advances and Future Trends
- Authors: Kai Li, Zhengyang Zhang, Azadeh Pourkabirian, Wei Ni, Falko Dressler, Ozgur B. Akan
- Abstract summary: We investigate the most recent techniques of resilient federated learning (ResFL) in CyberEdge networks.
We focus on joint training with agglomerative deduction and feature-oriented security mechanisms.
These advancements offer ultra-low latency, artificial intelligence (AI)-driven network management, and improved resilience against adversarial attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this survey, we investigate the most recent techniques of resilient federated learning (ResFL) in CyberEdge networks, focusing on joint training with agglomerative deduction and feature-oriented security mechanisms. We explore adaptive hierarchical learning strategies to tackle non-IID data challenges, improving scalability and reducing communication overhead. Fault tolerance techniques and agglomerative deduction mechanisms are studied to detect unreliable devices, refine model updates, and enhance convergence stability. Unlike existing FL security research, we comprehensively analyze feature-oriented threats, such as poisoning, inference, and reconstruction attacks that exploit model features. Moreover, we examine resilient aggregation techniques, anomaly detection, and cryptographic defenses, including differential privacy and secure multi-party computation, to strengthen FL security. In addition, we discuss the integration of 6G, large language models (LLMs), and interoperable learning frameworks to enhance privacy-preserving and decentralized cross-domain training. These advancements offer ultra-low latency, artificial intelligence (AI)-driven network management, and improved resilience against adversarial attacks, fostering the deployment of secure ResFL in CyberEdge networks.
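The resilient aggregation techniques surveyed above can be illustrated with a minimal sketch. The following is a hedged example of one common Byzantine-robust rule, a coordinate-wise trimmed mean; the client updates, dimensions, and trim fraction are all illustrative, not taken from any specific paper:

```python
import numpy as np

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean over client updates.

    Sorts each coordinate across clients, drops the k largest and k
    smallest values, and averages the rest, bounding the influence of
    outlier (potentially poisoned) updates.
    """
    stacked = np.stack(updates)                # (n_clients, n_params)
    k = int(len(updates) * trim_frac)          # values trimmed per side
    sorted_vals = np.sort(stacked, axis=0)     # sort each coordinate
    kept = sorted_vals[k:len(updates) - k]     # discard extremes
    return kept.mean(axis=0)

# Illustrative round: 4 honest clients near the true update, 1 poisoned.
honest = [np.array([1.0, -0.5]) + 0.01 * i for i in range(4)]
poisoned = [np.array([100.0, 100.0])]
agg = trimmed_mean(honest + poisoned, trim_frac=0.2)
# agg stays near the honest updates despite the outlier.
```

A plain mean over the same five updates would be dragged to roughly 20 in each coordinate; trimming one value per side removes the outlier entirely.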
Related papers
- Federated Learning-Driven Cybersecurity Framework for IoT Networks with Privacy-Preserving and Real-Time Threat Detection Capabilities [0.0]
Traditional centralized security methods often struggle to balance privacy preservation and real-time threat detection in IoT networks.
This study proposes a Federated Learning-Driven Cybersecurity Framework designed specifically for IoT environments.
Secure aggregation of locally trained models is achieved using homomorphic encryption, allowing collaborative learning without exposing sensitive information.
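The paper above pairs FL with homomorphic encryption for secure aggregation. As a self-contained illustration of the same goal that needs no HE library, the sketch below uses pairwise additive masking instead (the idea behind Bonawitz-style secure aggregation): each pair of clients shares a random mask that one adds and the other subtracts, so individual masked updates look random while the server's sum is unchanged. All names, seeds, and values are illustrative:

```python
import random

def pairwise_masks(client_ids, dim, seed_base=0):
    """For each pair (a, b) with a < b, derive a shared random mask from a
    common seed; client a adds it and client b subtracts it, so the masks
    cancel in the aggregate."""
    masks = {cid: [0.0] * dim for cid in client_ids}
    for a in client_ids:
        for b in client_ids:
            if a < b:
                rng = random.Random(seed_base * 10_000 + a * 100 + b)
                m = [rng.uniform(-1000.0, 1000.0) for _ in range(dim)]
                for d in range(dim):
                    masks[a][d] += m[d]
                    masks[b][d] -= m[d]
    return masks

updates = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}
masks = pairwise_masks(list(updates), dim=2)
masked = {c: [u + m for u, m in zip(updates[c], masks[c])] for c in updates}
# The server only ever sees `masked`; summing it recovers the true total.
total = [sum(masked[c][d] for c in masked) for d in range(2)]
# total ≈ [9.0, 12.0], the sum of the raw updates.
```

Real protocols derive the pairwise seeds via key agreement and handle client dropouts; this sketch deliberately skips both.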
arXiv Detail & Related papers (2025-02-14T23:11:51Z)
- Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection [4.169915659794567]
This research introduces "Dynamically Retrainable Firewalls".
Unlike traditional firewalls that rely on static rules to inspect traffic, these advanced systems leverage machine learning algorithms to analyze network traffic patterns dynamically and identify threats.
It also discusses strategies to improve performance, reduce latency, optimize resource utilization, and address integration issues with present-day concepts such as Zero Trust and mixed environments.
arXiv Detail & Related papers (2025-01-14T00:04:35Z) - A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures [0.0]
Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity.<n>Our study delves into adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering.<n>Our research lays the groundwork for strengthening defense mechanisms to address the potential breaches in network security and privacy posed by adversarial attacks.
arXiv Detail & Related papers (2024-12-18T14:21:46Z) - Advancing Security in AI Systems: A Novel Approach to Detecting
Backdoors in Deep Neural Networks [3.489779105594534]
backdoors can be exploited by malicious actors on deep neural networks (DNNs) and cloud services for data processing.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
arXiv Detail & Related papers (2024-03-13T03:10:11Z) - Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS)
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
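FLEKD's exact procedure is specified in the paper itself; as a hedged sketch of the general ensemble-distillation idea it builds on, the server can average temperature-softened client predictions on a shared public set and distill the global model toward those soft targets instead of fusing weights directly. The shapes, temperature, and random logits below are illustrative:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_soft_targets(client_logits, temperature=2.0):
    """Average each client's softened class probabilities on a shared
    public set; the result serves as the distillation target for the
    global model, replacing direct weight fusion."""
    probs = [softmax(l, temperature) for l in client_logits]
    return np.mean(probs, axis=0)

# Illustrative: 3 clients' logits on 2 public samples with 3 classes.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(2, 3)) for _ in range(3)]
targets = ensemble_soft_targets(logits)   # shape (2, 3), rows sum to 1
```

Distilling on predictions rather than parameters is what makes the aggregation flexible: clients may even run heterogeneous model architectures, since only their outputs on the public set are combined.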
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AIGC.
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
arXiv Detail & Related papers (2023-07-14T04:13:11Z) - Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluate our implemented framework and report its performance on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adrial attacks and defenses in machine learning and deep neural network have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
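RoFL's contribution is making robustness constraints verifiable over encrypted updates; stripped of the cryptography, the underlying check is a norm bound on each client update. A minimal sketch with illustrative values:

```python
import numpy as np

def enforce_norm_bound(update, bound=1.0):
    """Project a client update onto an L2 ball of radius `bound`, so a
    malicious client cannot submit an arbitrarily large update that
    dominates the aggregate."""
    norm = np.linalg.norm(update)
    if norm > bound:
        return update * (bound / norm)
    return update

benign = np.array([0.3, -0.4])       # norm 0.5: passes through unchanged
malicious = np.array([30.0, 40.0])   # norm 50: rescaled onto the bound
clipped = enforce_norm_bound(malicious)
```

In RoFL itself the server verifies such bounds cryptographically, without ever seeing the plaintext updates; the plain clipping above only illustrates the constraint being enforced.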
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training procedure is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.