EdgeML: Towards Network-Accelerated Federated Learning over Wireless Edge
- URL: http://arxiv.org/abs/2111.09410v1
- Date: Thu, 14 Oct 2021 14:06:57 GMT
- Title: EdgeML: Towards Network-Accelerated Federated Learning over Wireless Edge
- Authors: Pinyarash Pinyoanuntapong, Prabhu Janakaraj, Ravikumar Balakrishnan, Minwoo Lee, Chen Chen, and Pu Wang
- Abstract summary: Federated learning (FL) is a distributed machine learning technology for next-generation AI systems.
This paper aims to accelerate FL convergence over the wireless edge by optimizing multi-hop federated networking performance.
- Score: 11.49608766562657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed machine learning technology for next-generation AI systems that allows a number of workers, i.e., edge devices, to collaboratively learn a shared global model while keeping their data local to prevent privacy leakage. Enabling FL over wireless multi-hop networks can democratize AI and make it accessible in a cost-effective manner. However, noisy, bandwidth-limited multi-hop wireless connections can lead to delayed and nomadic model updates, which significantly slow down FL convergence. To address these challenges, this paper aims to accelerate FL convergence over the wireless edge by optimizing multi-hop federated networking performance. In particular, the FL convergence optimization problem is formulated as a Markov decision process (MDP). To solve this MDP, multi-agent reinforcement learning (MA-RL) algorithms, together with domain-specific action-space refining schemes, are developed; these learn, online, the delay-minimum forwarding paths that minimize model-exchange latency between the edge devices (i.e., workers) and the remote server. To validate the proposed solutions, FedEdge is developed and implemented: the first experimental framework in the literature for FL over multi-hop wireless edge computing networks. FedEdge allows us to rapidly prototype, deploy, and evaluate novel FL algorithms along with RL-based system optimization methods on real wireless devices. Moreover, a physical experimental testbed is implemented by customizing widely adopted Linux wireless routers and ML computing nodes. Finally, our experimental results on the testbed show that the proposed network-accelerated FL system can practically and significantly improve FL convergence speed compared with an FL system running the production-grade, commercially available wireless networking protocol, BATMAN-Adv.
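To make the abstract's networking idea concrete, here is a minimal sketch of per-node (multi-agent) Q-learning that learns delay-minimum next hops on a toy topology. The node names, link delays, and hyperparameters are invented for illustration; this is not the paper's FedEdge implementation.

```python
# Toy sketch: each router runs its own Q-learning agent and picks the next hop
# that minimizes expected model-exchange delay to the aggregation server.
import random

random.seed(0)

# Hypothetical multi-hop topology: node -> {neighbor: mean link delay (ms)}
LINKS = {
    "worker": {"r1": 5.0, "r2": 20.0},
    "r1":     {"r3": 30.0, "server": 50.0},
    "r2":     {"r3": 5.0},
    "r3":     {"server": 5.0},
}

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
# One Q-table per node (the "multi-agent" part): Q[node][next_hop]
Q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in LINKS.items()}

def route_episode():
    """Forward one model update from worker to server, updating Q along the way."""
    node = "worker"
    while node != "server":
        nbrs = list(LINKS[node])
        # epsilon-greedy choice of next hop; Q holds expected remaining delay
        hop = random.choice(nbrs) if random.random() < EPS else min(nbrs, key=Q[node].get)
        delay = random.gauss(LINKS[node][hop], 1.0)   # noisy link delay sample
        future = 0.0 if hop == "server" else min(Q[hop].values())
        Q[node][hop] += ALPHA * (delay + GAMMA * future - Q[node][hop])
        node = hop

for _ in range(2000):
    route_episode()

# Greedy path after training: worker -> r2 -> r3 -> server (~30 ms total)
node, path = "worker", ["worker"]
while node != "server":
    node = min(Q[node], key=Q[node].get)
    path.append(node)
print(" -> ".join(path))
```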
Related papers
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
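A minimal magnitude-pruning sketch for the pruning entry above: the 50% sparsity target mirrors the reported communication saving, while the model shape is an invented placeholder (this is not the paper's HFL pruning scheme).

```python
# Zeroing ~50% of weights roughly halves the upload payload
# if only nonzero entries are transmitted.
import numpy as np

rng = np.random.default_rng(0)
local_weights = rng.normal(size=10_000)          # a flattened local model

def prune_by_magnitude(w, sparsity=0.5):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = prune_by_magnitude(local_weights)
payload_ratio = np.count_nonzero(pruned) / local_weights.size
print(f"upload payload: {payload_ratio:.0%} of the dense model")  # ~50%
```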
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work interprets FL within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
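A toy first-order MAML-style FL round, sketching the fast-adaptation idea from the entry above on scalar linear-regression clients; the task distribution and learning rates are illustrative assumptions, not the paper's setup.

```python
# Each client takes one inner SGD step on support data; the server averages
# the gradients evaluated at the adapted parameters (first-order MAML).
import numpy as np

rng = np.random.default_rng(1)
data = []
for s in (2.0, 2.5, 1.5):                 # heterogeneous client tasks (slopes)
    x = rng.normal(size=50)
    data.append((x, s * x + 0.1 * rng.normal(size=50)))

def grad(w, x, y):
    return np.mean(2 * (w * x - y) * x)   # d/dw of mean squared error

w_global, inner_lr, outer_lr = 0.0, 0.05, 0.5
for _ in range(100):
    meta_grads = []
    for x, y in data:
        w_adapted = w_global - inner_lr * grad(w_global, x[:25], y[:25])  # inner step
        meta_grads.append(grad(w_adapted, x[25:], y[25:]))  # grad at adapted point
    w_global -= outer_lr * np.mean(meta_grads)              # server meta-update

print(f"meta-initialization: {w_global:.2f}")  # near the task average (~2.0)
```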
- Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp).
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and to adjust the beamforming matrices at the parameter server (PS) for future model transmission.
arXiv Detail & Related papers (2023-02-04T07:26:06Z)
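The sketch below illustrates over-the-air aggregation with channel inversion, a generic analog-AirComp stand-in for the entry above; the paper's actual digital-modulation scheme and ANN-based model estimation are not modeled here.

```python
# Devices pre-scale updates by the inverse of their (known) channel gains,
# so the multiple-access channel's natural summation yields the FedAvg sum.
import numpy as np

rng = np.random.default_rng(2)
n_devices, dim = 8, 1000
updates = rng.normal(size=(n_devices, dim))          # local model updates
h = rng.rayleigh(scale=1.0, size=n_devices) + 0.1    # fading channel gains

tx = updates / h[:, None]                # channel inversion at each device
rx = (h[:, None] * tx).sum(axis=0)       # the channel adds all signals
rx += 0.01 * rng.normal(size=dim)        # receiver noise
fedavg = rx / n_devices                  # server recovers the average update

exact = updates.mean(axis=0)
print("max aggregation error:", np.abs(fedavg - exact).max())  # small
```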
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
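A minimal uniform-quantization round for the bitwidth-FL idea above; the fixed 6-bit width is an illustrative placeholder for the bitwidths the paper's RL method would choose per iteration.

```python
# Devices send b-bit integer codes plus a scale; the server dequantizes
# the local models and aggregates them into a global model.
import numpy as np

rng = np.random.default_rng(3)

def quantize(w, bits):
    """Symmetric uniform quantization of w to signed `bits`-bit integers."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale).astype(np.int32), scale

def dequantize(q, scale):
    return q * scale

local_models = [rng.normal(size=500) for _ in range(4)]
recovered = [dequantize(*quantize(w, bits=6)) for w in local_models]
global_model = np.mean(recovered, axis=0)            # aggregate quantized models

err = np.abs(global_model - np.mean(local_models, axis=0)).max()
print(f"6-bit aggregation error: {err:.4f}")         # shrinks as bits grow
```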
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
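A toy split-learning step with a one-round-delayed device-side gradient, echoing the delayed-gradient analysis in the entry above; the linear model, loss, and delay pattern are illustrative assumptions.

```python
# The device trains the front layer, the server the back layer; the device
# applies the gradient from the previous round so both can compute in parallel,
# and no labels ever leave the server side of the cut.
import numpy as np

rng = np.random.default_rng(4)
x, y = rng.normal(size=(64, 8)), rng.normal(size=(64, 1))
W1 = rng.normal(size=(8, 4)) * 0.1      # device-side (front) layer
W2 = rng.normal(size=(4, 1)) * 0.1      # server-side (back) layer
lr, delayed_dh = 0.01, None

for step in range(200):
    h = x @ W1                           # device forward, send activations
    yhat = h @ W2                        # server forward
    dyhat = 2 * (yhat - y) / len(x)      # MSE gradient at the cut output
    dh = dyhat @ W2.T                    # gradient w.r.t. activations, sent back
    W2 -= lr * (h.T @ dyhat)             # server updates immediately
    if delayed_dh is not None:
        W1 -= lr * (x.T @ delayed_dh)    # device applies *previous* round's gradient
    delayed_dh = dh

print("final MSE:", float(np.mean((x @ W1 @ W2 - y) ** 2)))
```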
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) induces task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
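A generic Newton-type federated step on local quadratic losses, sketching why second-order information can cut communication rounds as claimed in the entry above; the over-the-air aggregation itself is not modeled.

```python
# Each worker sends its gradient and a local Hessian summary; the server
# applies one global Newton step, which is exact for quadratic losses.
import numpy as np

rng = np.random.default_rng(5)
dim, workers = 5, 4
# Each worker holds a local quadratic loss 0.5 * (w - c)^T A (w - c)
As = [np.diag(rng.uniform(0.5, 2.0, size=dim)) for _ in range(workers)]
cs = [rng.normal(size=dim) for _ in range(workers)]

w = np.zeros(dim)
for _ in range(3):                       # few rounds: second-order converges fast
    g = np.mean([A @ (w - c) for A, c in zip(As, cs)], axis=0)
    H = np.mean(As, axis=0)              # exact Hessians for quadratics
    w -= np.linalg.solve(H, g)           # global Newton step at the server

optimum = np.linalg.solve(np.mean(As, axis=0),
                          np.mean([A @ c for A, c in zip(As, cs)], axis=0))
print("distance to optimum:", np.linalg.norm(w - optimum))  # ~0 after one step
```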
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergistic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. SNNs, however, are non-trivial to train, particularly over wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
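A loose sketch of aggregating multi-width (slimmable) local models from the entry above, where devices with poor channels deliver only a half-width prefix of their model; the paper's superposition coding and superposition training are not modeled here.

```python
# Average each parameter over the devices that actually delivered it:
# half-width devices contribute only to the prefix of the global model.
import numpy as np

rng = np.random.default_rng(6)
dim = 8
local = rng.normal(size=(5, dim))            # 5 local models (full width = 8)
delivered_width = [dim, dim // 2, dim, dim // 2, dim // 2]  # per-device outcome

sums, counts = np.zeros(dim), np.zeros(dim)
for model, width in zip(local, delivered_width):
    sums[:width] += model[:width]            # only the delivered prefix counts
    counts[:width] += 1

global_model = sums / counts                 # per-parameter average
print(global_model)
```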
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
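A small numerical sketch of the retransmission argument above: averaging K noisy copies of the same update reduces the estimation-noise variance by a factor of K. The noise model is an illustrative assumption.

```python
# Receive the same model update K times over a noisy link and average.
import numpy as np

rng = np.random.default_rng(7)
true_update = rng.normal(size=1000)
noise_std = 0.5

def receive(k):
    """Average k noisy copies of the same transmitted update."""
    copies = true_update + noise_std * rng.normal(size=(k, true_update.size))
    return copies.mean(axis=0)

for k in (1, 4, 16):
    err = np.linalg.norm(receive(k) - true_update) / np.sqrt(true_update.size)
    print(f"K={k:2d} retransmissions: RMS error {err:.3f}")  # ~0.5 / sqrt(K)
```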
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a unit-modulus wireless FL framework that uploads local model parameters and computes global model updates via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
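A minimal penalty alternating minimization loop on a unit-modulus least-squares toy problem, illustrating only the optimization technique named in the title above; the wireless FL formulation itself is not reproduced, and the problem sizes are invented.

```python
# min ||A x - b||^2 with |x_i| = 1, relaxed to ||A x - b||^2 + rho ||x - u||^2
# where u is unit-modulus. Alternate a ridge solve for x with an elementwise
# phase projection for u.
import numpy as np

rng = np.random.default_rng(8)
m, n, rho = 20, 8, 1.0
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = rng.normal(size=m) + 1j * rng.normal(size=m)

u = np.exp(1j * rng.uniform(0, 2 * np.pi, size=n))      # feasible start
lhs = A.conj().T @ A + rho * np.eye(n)
for _ in range(100):
    x = np.linalg.solve(lhs, A.conj().T @ b + rho * u)  # ridge step in x
    u = x / np.abs(x)                                   # project onto |u_i| = 1

print("residual with unit-modulus solution:", np.linalg.norm(A @ u - b))
```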
- FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems [40.421253127588244]
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and the global training update at the cloud.
arXiv Detail & Related papers (2021-07-04T08:03:15Z)
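A short sketch of the fog-cloud aggregation pattern described in the FedFog entry above: fog servers average their own users' gradients, and the cloud combines the fog-level aggregates weighted by user counts. The topology and sizes are illustrative assumptions.

```python
# Two-tier aggregation: per-fog averaging, then a weighted cloud combine.
import numpy as np

rng = np.random.default_rng(9)
dim = 16
# Two hypothetical fog servers with 3 and 5 attached users
fogs = [rng.normal(size=(3, dim)), rng.normal(size=(5, dim))]

fog_aggregates = [users.mean(axis=0) for users in fogs]       # local aggregation
weights = np.array([len(users) for users in fogs], dtype=float)
global_update = np.average(fog_aggregates, axis=0, weights=weights)  # cloud step

flat = np.vstack(fogs).mean(axis=0)       # same as averaging all users directly
print("matches flat average:", np.allclose(global_update, flat))
```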
This list is automatically generated from the titles and abstracts of the papers on this site.