FedNC: A Secure and Efficient Federated Learning Method with Network
Coding
- URL: http://arxiv.org/abs/2305.03292v3
- Date: Tue, 9 Jan 2024 03:20:48 GMT
- Title: FedNC: A Secure and Efficient Federated Learning Method with Network
Coding
- Authors: Yuchen Shi, Zheqi Zhu, Pingyi Fan, Khaled B. Letaief and Chenghui Peng
- Abstract summary: Federated Learning (FL) is a promising distributed learning mechanism which faces two major challenges, namely privacy breaches and system efficiency.
In this work, we reconceptualize the FL system from the perspective of network information theory, and formulate an original FL communication framework, FedNC, which is inspired by Network Coding (NC).
- Score: 18.556693764310328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising distributed learning mechanism which
still faces two major challenges, namely privacy breaches and system
efficiency. In this work, we reconceptualize the FL system from the perspective
of network information theory, and formulate an original FL communication
framework, FedNC, which is inspired by Network Coding (NC). The main idea of
FedNC is to mix the information of the local models by forming random linear
combinations of the original parameters before uploading them for further
aggregation. Owing to the benefits of the coding scheme, both theoretical and
experimental analyses indicate that FedNC improves on traditional FL in several
important respects, including security, efficiency, and robustness. To the best
of our knowledge, this is the first framework to introduce NC into FL. As FL
continues to evolve within practical network frameworks, more variants can be
designed based on FedNC.
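The random-linear-combination step described in the abstract can be sketched with a toy example. This is a minimal illustration of the general network-coding idea, not the paper's implementation: the Gaussian coefficients, the square mixing matrix, and the FedAvg-style aggregation are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 clients, each holding a flattened local model
# of 6 parameters (rows = original client updates).
n_clients, dim = 4, 6
local_models = rng.normal(size=(n_clients, dim))

# Coding step (sketch): instead of forwarding raw parameters, nodes
# upload random linear combinations of the local updates.
coeffs = rng.normal(size=(n_clients, n_clients))  # random mixing matrix
coded = coeffs @ local_models                     # each row is a coded packet

# The server, which knows the coefficient vectors, can decode once it
# holds n_clients linearly independent packets, then aggregate as usual.
decoded = np.linalg.solve(coeffs, coded)
global_update = decoded.mean(axis=0)              # FedAvg-style average

assert np.allclose(decoded, local_models)
```

An eavesdropper who intercepts a coded packet without its coefficient vector sees only a random mixture of all clients' parameters rather than any single client's model, which is the intuition behind the claimed security benefit.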
Related papers
- Federated Learning in Practice: Reflections and Projections [17.445826363802997]
Federated Learning (FL) is a machine learning technique that enables multiple entities to collaboratively learn a shared model without exchanging their local data.
Production systems from organizations like Google, Apple, and Meta demonstrate the real-world applicability of FL.
We propose a redefined FL framework that prioritizes privacy principles rather than rigid definitions.
arXiv Detail & Related papers (2024-10-11T15:10:38Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks [49.19502459827366]
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
Traditional implementations of FL have largely neglected the potential for inter-network cooperation.
We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions.
arXiv Detail & Related papers (2023-03-15T04:41:36Z)
- FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning [15.665720478360557]
Federated learning (FL) has prevailed as an efficient and privacy-preserved scheme for distributed learning.
We formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal for different types of deep learning models.
arXiv Detail & Related papers (2023-03-11T09:57:00Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
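The age-aware weighting idea can be illustrated with a short sketch. The exponential decay and the rate `alpha` are hypothetical choices made here; the abstract does not specify the actual weighting function used in the paper.

```python
import numpy as np

def age_aware_aggregate(updates, ages, alpha=0.5):
    """Weighted average of client updates that discounts stale ones.

    The exp(-alpha * age) decay is an illustrative assumption, not the
    weighting proposed in the paper.
    """
    weights = np.exp(-alpha * np.asarray(ages, dtype=float))
    weights /= weights.sum()                    # normalize to sum to 1
    return weights @ np.asarray(updates, dtype=float)

# Three client updates; the second arrives two rounds stale.
updates = np.stack([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ages = [0, 2, 0]
agg = age_aware_aggregate(updates, ages)        # stale client contributes less
```

With all ages equal the scheme reduces to a plain average, so it only changes behavior when stragglers are present.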
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Federated Distillation based Indoor Localization for IoT Networks [7.219077740523683]
The federated distillation (FD) paradigm has recently been proposed as a promising alternative to federated learning (FL).
In this work, we propose an FD framework that properly operates on regression learning problems.
We show that the proposed framework is much more scalable than FL, thus more likely to cope with the expansion of wireless networks.
arXiv Detail & Related papers (2022-05-23T16:32:52Z)
- FedComm: Federated Learning as a Medium for Covert Communication [56.376997104843355]
Federated Learning (FL) is a solution to mitigate the privacy implications related to the adoption of deep learning.
This paper thoroughly investigates the communication capabilities of an FL scheme.
We introduce FedComm, a novel multi-system covert-communication technique.
arXiv Detail & Related papers (2022-01-21T17:05:56Z)
- On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results demonstrate a significant runtime reduction for our flying-master FL framework compared to the original FL, based on measurement results from our EdgeAI testbed and real 5G networks.
arXiv Detail & Related papers (2021-12-21T19:04:42Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.