Beyond Model Scale Limits: End-Edge-Cloud Federated Learning with Self-Rectified Knowledge Agglomeration
- URL: http://arxiv.org/abs/2501.00693v1
- Date: Wed, 01 Jan 2025 01:11:16 GMT
- Title: Beyond Model Scale Limits: End-Edge-Cloud Federated Learning with Self-Rectified Knowledge Agglomeration
- Authors: Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Ke Xu, Quyang Pan, Bo Gao, Tian Wen
- Abstract summary: We propose End-Edge-Cloud Federated Learning with Self-Rectified Knowledge Agglomeration (FedEEC).
FedEEC is a novel EECC-empowered FL framework that allows the trained models, from end devices through edge servers to the cloud, to grow larger in size and stronger in generalization ability.
- Score: 13.056361971363902
- Abstract: The rise of End-Edge-Cloud Collaboration (EECC) offers a promising paradigm for Artificial Intelligence (AI) model training across end devices, edge servers, and cloud data centers, providing enhanced reliability and reduced latency. Hierarchical Federated Learning (HFL) can benefit from this paradigm by enabling multi-tier model aggregation across distributed computing nodes. However, the potential of HFL is significantly constrained by the inherent heterogeneity and dynamic characteristics of EECC environments. Specifically, imposing a uniform model structure, bounded by the least powerful end device, on all computing nodes creates a performance bottleneck. Meanwhile, coupled heterogeneity in data distributions and resource capabilities across tiers disrupts hierarchical knowledge transfer, leading to biased updates and degraded performance. Furthermore, the mobility and fluctuating connectivity of computing nodes in EECC environments introduce complexities in dynamic node migration, further compromising the robustness of the training process. To address these challenges within a unified framework, we propose End-Edge-Cloud Federated Learning with Self-Rectified Knowledge Agglomeration (FedEEC), a novel EECC-empowered FL framework that allows the trained models, from end devices through edge servers to the cloud, to grow larger in size and stronger in generalization ability. FedEEC introduces two key innovations: (1) the Bridge Sample Based Online Distillation Protocol (BSBODP), which enables knowledge transfer between neighboring nodes through generated bridge samples, and (2) Self-Knowledge Rectification (SKR), which refines the transferred knowledge to prevent suboptimal cloud model optimization. The proposed framework handles cross-tier resource heterogeneity, enables effective knowledge transfer between neighboring nodes, and satisfies the migration-resilient requirements of EECC.
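The abstract describes BSBODP and SKR only at a high level. The PyTorch sketch below gives a toy rendering under explicit assumptions: the bridge-sample generator is replaced by random inputs, and SKR is approximated by confidence-based reweighting of the teacher's soft labels. Neither stand-in is the paper's actual procedure; the sketch only illustrates distillation between two differently sized neighboring models.

```python
import torch
import torch.nn.functional as F

# Two neighboring tiers with different capacities: the edge model is
# deliberately larger than the end model, since FedEEC lets models grow
# from end to cloud instead of sharing one uniform structure.
end_model  = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 10))
edge_model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))

def generate_bridge_samples(n=64, dim=32):
    # Placeholder for BSBODP's "generated bridge samples": random inputs
    # here; the real generation procedure is not given in the abstract.
    return torch.randn(n, dim)

def rectified_soft_labels(logits, tau=2.0, conf_floor=0.5):
    # Hypothetical stand-in for Self-Knowledge Rectification (SKR):
    # down-weight low-confidence teacher predictions so they steer the
    # student less. The actual SKR rule may differ.
    probs = F.softmax(logits / tau, dim=-1)
    conf = probs.max(dim=-1, keepdim=True).values
    weight = (conf / conf.max()).clamp(min=conf_floor)
    return probs, weight

def distill_step(student, teacher, opt, tau=2.0):
    x = generate_bridge_samples()
    with torch.no_grad():
        t_probs, w = rectified_soft_labels(teacher(x), tau)
    s_log_probs = F.log_softmax(student(x) / tau, dim=-1)
    # Confidence-weighted KL distillation on the bridge samples.
    loss = (w * F.kl_div(s_log_probs, t_probs, reduction="none").sum(-1, keepdim=True)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

opt = torch.optim.SGD(edge_model.parameters(), lr=0.1)
for step in range(5):  # the edge model learns from the smaller end model
    print(f"step {step}: distill loss = {distill_step(edge_model, end_model, opt):.4f}")
```

In the actual protocol, distillation is online (neighbors can teach each other), and the same pattern repeats up the hierarchy from end to edge to cloud.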
Related papers
- SCALE: Self-regulated Clustered federAted LEarning in a Homogeneous Environment [4.925906256430176]
Federated Learning (FL) has emerged as a transformative approach for enabling distributed machine learning while preserving user privacy.
This paper presents a novel FL methodology that removes a common architectural constraint by eliminating the dependency on edge servers.
arXiv Detail & Related papers (2024-07-25T20:42:16Z)
- Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation [56.79064699832383]
We establish a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm in which the edge models only need to perform forward propagation.
In CEMA, to reduce the communication burden, we devise two criteria to exclude unnecessary samples from being uploaded to the cloud; a minimal entropy-filtering sketch follows this entry.
arXiv Detail & Related papers (2024-02-27T08:47:19Z)
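The summary names two sample-exclusion criteria without giving them. A minimal sketch in the same spirit, filtering by prediction entropy with illustrative thresholds (CEMA's actual criteria and values may differ):

```python
import torch
import torch.nn.functional as F

def should_upload(logits, low=0.2, high=2.0):
    """Hypothetical version of two exclusion criteria: skip samples whose
    prediction entropy is very low (the edge model is already confident,
    nothing to adapt) or very high (too unreliable to help adaptation)."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return (entropy > low) & (entropy < high)

logits = torch.randn(8, 10) * 3          # stand-in edge-model outputs
mask = should_upload(logits)
print(f"uploading {int(mask.sum())} of {len(mask)} samples to the cloud")
```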
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components on the client side: complementary data generation and knowledge-distillation-based model training (sketched after this entry).
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
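A minimal sketch of a client objective matching the two components named above, assuming the generated data is used for distillation toward the global (consensus) model; the loss weighting and temperature are illustrative, not the paper's:

```python
import torch
import torch.nn.functional as F

def client_loss(local_model, global_model, x_real, y_real, x_gen, tau=2.0, lam=0.5):
    # Supervised loss on the client's own data, plus a distillation term
    # that pulls the local model toward the global model on generated
    # samples, counteracting drift from non-IID local data.
    ce = F.cross_entropy(local_model(x_real), y_real)
    with torch.no_grad():
        t = F.softmax(global_model(x_gen) / tau, dim=-1)
    s = F.log_softmax(local_model(x_gen) / tau, dim=-1)
    kd = F.kl_div(s, t, reduction="batchmean")
    return ce + lam * kd

local_model = torch.nn.Linear(16, 4)
global_model = torch.nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_gen = torch.randn(8, 16)   # stand-in for the complementary generated data
print(f"client loss = {client_loss(local_model, global_model, x, y, x_gen).item():.4f}")
```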
- Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration [10.90126132493769]
Agglomerative Federated Learning (FedAgg) is a novel EECC-empowered FL framework that allows the trained models, from end devices through edge servers to the cloud, to grow larger in size and stronger in generalization ability.
FedAgg outperforms state-of-the-art methods by an average of 4.53% in accuracy and achieves remarkable improvements in convergence rate.
arXiv Detail & Related papers (2023-12-01T06:18:45Z)
- Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AIGC.
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
arXiv Detail & Related papers (2023-07-14T04:13:11Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck (see the sketch after this entry).
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
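A small NumPy simulation of the core idea of analog over-the-air aggregation: simultaneous transmissions superpose in the multiple-access channel, so the server obtains a noisy sum of all updates in a single channel use instead of one upload per client. Channel inversion and the noise level below are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=128) for _ in range(10)]   # clients' local model updates

gains = rng.uniform(0.5, 1.5, size=10)                  # per-client channel fading
tx = np.stack([w / g for w, g in zip(updates, gains)])  # pre-equalize (channel inversion)
received = (tx * gains[:, None]).sum(axis=0)            # channel adds the signals
received += rng.normal(scale=0.01, size=128)            # additive receiver noise
ota_avg = received / len(updates)                       # over-the-air average

exact_avg = np.mean(updates, axis=0)
print("max deviation from exact averaging:", np.abs(ota_avg - exact_avg).max())
```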
- HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association [38.99309610943313]
Federated learning (FL) is a promising paradigm that enables collaboratively learning a shared model across massive clients.
For many existing FL systems, clients must frequently exchange large model parameters with the remote cloud server directly over wide-area networks (WAN).
We resort to the hierarchical federated learning paradigm of HiFL, which reaps the benefits of mobile edge computing; a minimal staleness-weighting sketch follows this entry.
arXiv Detail & Related papers (2023-01-16T14:39:04Z)
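The "adaptive staleness control" in the title suggests discounting stale asynchronous updates. A minimal sketch of one common staleness-weighting rule; the exact function HiFlash uses may differ:

```python
def staleness_weight(alpha0, staleness, a=0.5):
    """Polynomial staleness discounting, a common choice in asynchronous
    FL: the staler an update, the less it moves the shared model."""
    return alpha0 / (1.0 + staleness) ** a

def apply_update(global_w, client_w, staleness, alpha0=0.6):
    # Mix a (possibly stale) client model into the global model.
    a = staleness_weight(alpha0, staleness)
    return [(1 - a) * g + a * c for g, c in zip(global_w, client_w)]

global_w = [1.0, 2.0]
fresh = apply_update(global_w, [2.0, 0.0], staleness=0)   # strong influence
stale = apply_update(global_w, [2.0, 0.0], staleness=8)   # heavily discounted
print(fresh, stale)
```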
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model on the server (a minimal server-side sketch follows this entry).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
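A server-side sketch of data-free distillation in the spirit of FedFTG, assuming a noise-to-sample generator and an averaged client ensemble as the teacher; FedFTG additionally trains the generator adversarially to find hard samples, which is omitted here:

```python
import torch
import torch.nn.functional as F

clients = [torch.nn.Linear(16, 4) for _ in range(3)]   # uploaded client models
global_model = torch.nn.Linear(16, 4)
generator = torch.nn.Linear(8, 16)                     # noise -> pseudo sample (stand-in)
opt = torch.optim.SGD(global_model.parameters(), lr=0.1)

for step in range(5):
    z = torch.randn(32, 8)
    with torch.no_grad():
        x = generator(z)                               # pseudo data, no real data needed
        teacher = torch.stack([F.softmax(c(x), -1) for c in clients]).mean(0)
    # Distill the client ensemble into the global model on pseudo data.
    loss = F.kl_div(F.log_softmax(global_model(x), -1), teacher, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: KD loss {loss.item():.4f}")
```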
- Towards Communication-efficient and Attack-Resistant Federated Edge Learning for Industrial Internet of Things [40.20432511421245]
Federated Edge Learning (FEL) allows edge nodes to collaboratively train a global deep learning model for edge computing in the Industrial Internet of Things (IIoT).
FEL faces two critical challenges: communication overhead and data privacy.
We propose a communication-efficient and privacy-enhanced asynchronous FEL framework for edge computing in IIoT.
arXiv Detail & Related papers (2020-12-08T14:11:32Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism that builds a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Accelerating Federated Learning over Reliability-Agnostic Clients in Mobile Edge Computing Systems [15.923599062148135]
Federated learning has emerged as a promising privacy-preserving approach to facilitating AI applications.
Optimizing the efficiency and effectiveness of FL when it is integrated with the mobile edge computing (MEC) architecture remains a major challenge.
In this paper, a multi-layer federated learning protocol called HybridFL is designed for the MEC architecture (a minimal two-tier aggregation sketch follows this entry).
arXiv Detail & Related papers (2020-07-28T17:35:39Z)
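A minimal two-tier aggregation sketch consistent with a multi-layer protocol such as HybridFL: each edge (region) aggregates only the clients that actually respond in a round, and the cloud then aggregates the edge results. Dropout probability and dataset sizes are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def fedavg(weights, sizes):
    """Sample-size-weighted averaging, the standard FedAvg rule."""
    return np.average(weights, axis=0, weights=np.asarray(sizes, dtype=float))

regions, region_sizes = [], []
for edge in range(3):
    client_ws = rng.normal(size=(5, 32))        # 5 clients' model vectors
    client_ns = rng.integers(50, 200, size=5)   # local dataset sizes
    alive = rng.random(5) > 0.3                 # unreliable clients may drop out
    if alive.any():
        # Edge-level aggregation over the clients that responded.
        regions.append(fedavg(client_ws[alive], client_ns[alive]))
        region_sizes.append(client_ns[alive].sum())

# Cloud-level aggregation over the edge (regional) models.
global_w = fedavg(np.stack(regions), region_sizes)
print("aggregated", len(regions), "regions; global norm =", np.linalg.norm(global_w).round(3))
```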