Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks
- URL: http://arxiv.org/abs/2408.04705v1
- Date: Thu, 8 Aug 2024 18:05:11 GMT
- Title: Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks
- Authors: Yudi Huang, Tingyang Sun, Ting He
- Abstract summary: Decentralized federated learning (DFL) has the promise of boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination.
Most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network.
We jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network.
- Score: 3.9162099309900835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emerging machine learning paradigm of decentralized federated learning (DFL) has the promise of greatly boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination. Despite significant efforts on improving the communication efficiency of DFL, most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network, which fails to correctly capture the communication cost when learning over a general bandwidth-limited network, as encountered in many edge networks. In this work, we address this gap by leveraging recent advances in network tomography to jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network. By carefully analyzing the structure of our problem, we decompose it into a series of optimization problems that can each be solved efficiently, to collectively minimize the total training time. Extensive data-driven simulations show that our solution can significantly accelerate DFL in comparison with state-of-the-art designs.
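The overlay idea in the abstract — agents that are logical neighbors without being physically adjacent, exchanging model updates only over overlay links — can be illustrated with a minimal decentralized-averaging sketch. This is not the paper's algorithm (which jointly optimizes demands and schedules using network tomography); the ring overlay, Metropolis mixing weights, and toy quadratic objectives below are illustrative assumptions.

```python
def metropolis_weights(neighbors):
    """Doubly-stochastic mixing weights for an undirected overlay graph.

    neighbors: dict mapping agent id -> set of overlay-neighbor ids.
    W[i][j] > 0 only for overlay edges (plus a self-loop), so each
    communication round uses only the overlay links.
    """
    W = {i: {} for i in neighbors}
    for i, nbrs in neighbors.items():
        for j in nbrs:
            W[i][j] = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
        W[i][i] = 1.0 - sum(W[i].values())
    return W

def dfl_round(x, grads, W, lr=0.1):
    """One DFL round: each agent takes a local gradient step,
    then gossip-averages with its overlay neighbors."""
    stepped = {i: x[i] - lr * grads[i](x[i]) for i in x}
    return {i: sum(W[i][j] * stepped[j] for j in W[i]) for i in x}

# Toy problem: agent i minimizes (x - t_i)^2 on its local data;
# the global optimum is the mean of the targets.
targets = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0}
grads = {i: (lambda xi, t=t: 2.0 * (xi - t)) for i, t in targets.items()}

# A ring overlay: overlay neighbors need not be adjacent in the
# underlying (bandwidth-limited) network.
neighbors = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
W = metropolis_weights(neighbors)

x = {i: 0.0 for i in targets}
for _ in range(200):
    x = dfl_round(x, grads, W)

# All agents end up near the global optimum, mean(targets) = 4.0.
```

In this sketch every overlay edge carries one model exchange per round; the paper's contribution is deciding which overlay edges to use and when to schedule their transmissions so that the underlay's shared, bandwidth-limited links do not become the bottleneck.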
Related papers
- FedsLLM: Federated Split Learning for Large Language Models over Communication Networks [30.47242577997792]
This paper combines low-rank adaptation technology (LoRA) with the splitfed learning framework to propose the federated split learning for large language models (FedsLLM) framework.
The proposed algorithm reduces delays by an average of 47.63% compared to unoptimized scenarios.
arXiv Detail & Related papers (2024-07-12T13:23:54Z)
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with such heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Communication-Efficient Split Learning Based on Analog Communication and Over the Air Aggregation [48.150466900765316]
Split-learning (SL) has recently gained popularity due to its inherent privacy-preserving capabilities and ability to enable collaborative inference for devices with limited computational power.
Standard SL algorithms assume an ideal underlying digital communication system and ignore the problem of scarce communication bandwidth.
We propose a novel SL framework for the remote inference problem that introduces an additional layer at the agent side and constrains the choices of the weights and biases to enable over-the-air aggregation.
arXiv Detail & Related papers (2021-06-02T07:49:41Z)
- Convergence Analysis and System Design for Federated Learning over Wireless Networks [16.978276697446724]
Federated learning (FL) has emerged as an important and promising learning scheme in IoT.
FL training requires frequent model exchange, which is largely affected by the wireless communication network.
In this paper, we analyze the convergence rate of FL training considering the joint impact of communication network and training settings.
arXiv Detail & Related papers (2021-04-30T02:33:29Z)
- Federated Double Deep Q-learning for Joint Delay and Energy Minimization in IoT networks [12.599009485247283]
We propose a federated deep reinforcement learning framework to solve a multi-objective optimization problem.
To enhance the learning speed of IoT devices (agents), we incorporate federated learning (FDL) at the end of each episode.
Our numerical results demonstrate the efficacy of our proposed federated DDQN framework in terms of learning speed.
arXiv Detail & Related papers (2021-04-02T18:41:59Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.