Performance Analysis for Resource Constrained Decentralized Federated
Learning Over Wireless Networks
- URL: http://arxiv.org/abs/2308.06496v1
- Date: Sat, 12 Aug 2023 07:56:48 GMT
- Title: Performance Analysis for Resource Constrained Decentralized Federated
Learning Over Wireless Networks
- Authors: Zhigang Yan and Dong Li
- Abstract summary: Federated learning (FL) can incur significant communication overhead and relies on a central server; decentralized federated learning (DFL) addresses both issues.
This study analyzes the performance of resource-constrained DFL using different communication schemes (digital and analog) over wireless networks to optimize communication efficiency.
- Score: 4.76281731053599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) can lead to significant communication overhead and
reliance on a central server. To address these challenges, decentralized
federated learning (DFL) has been proposed as a more resilient framework. DFL
involves parameter exchange between devices through a wireless network. This
study analyzes the performance of resource-constrained DFL using different
communication schemes (digital and analog) over wireless networks to optimize
communication efficiency. Specifically, we provide convergence bounds for both
digital and analog transmission approaches, enabling analysis of the model
performance trained on DFL. Furthermore, for digital transmission, we
investigate and analyze resource allocation between computation and
communication as well as the convergence rate, obtaining the communication
complexity and the minimum probability of correct communication required to
guarantee convergence. For analog transmission, we discuss the impact of channel
fading and noise on the model performance and the maximum error accumulation
under which convergence is still guaranteed over fading channels. Finally, we conduct numerical
simulations to evaluate the performance and convergence rate of convolutional
neural networks (CNNs) and Vision Transformer (ViT) trained in the DFL
framework on the Fashion-MNIST and CIFAR-10 datasets. Our simulation results
validate our analysis and discussion, revealing how to improve performance by
optimizing system parameters under different communication conditions.
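The digital/analog distinction at the heart of the abstract can be illustrated with a toy sketch. This is not the paper's algorithm; the function `dfl_exchange` and its parameters (`p_success` for digital packet delivery, `noise_std` for analog receiver noise) are hypothetical stand-ins for the two communication models, assuming simple neighbor averaging over a fixed topology:

```python
import numpy as np

rng = np.random.default_rng(0)

def dfl_exchange(models, adjacency, mode="digital", p_success=0.9, noise_std=0.01):
    """One decentralized parameter-exchange round (illustrative only).

    models:    list of 1-D parameter vectors, one per device.
    adjacency: boolean matrix; adjacency[i][j] = True if device i hears j.
    'digital': each neighbor's packet arrives intact with prob. p_success,
               otherwise it is dropped entirely (erasure model).
    'analog':  each neighbor's parameters arrive perturbed by additive
               Gaussian noise, a crude stand-in for a noisy analog channel.
    """
    n = len(models)
    new_models = []
    for i in range(n):
        received = [models[i]]  # a device always keeps its own parameters
        for j in range(n):
            if i == j or not adjacency[i][j]:
                continue
            if mode == "digital":
                if rng.random() < p_success:
                    received.append(models[j])
            else:  # analog
                received.append(models[j] + rng.normal(0.0, noise_std, models[j].shape))
        # consensus step: average everything that was successfully received
        new_models.append(np.mean(received, axis=0))
    return new_models
```

With perfect digital delivery (`p_success=1.0`) on a connected topology, repeated rounds drive all devices to the global average of the initial models; lowering `p_success` or raising `noise_std` is one way to see qualitatively how communication imperfections slow or perturb that consensus.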
Related papers
- Digital versus Analog Transmissions for Federated Learning over Wireless
Networks [91.20926827568053]
We compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks.
We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints.
A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks.
arXiv Detail & Related papers (2024-02-15T01:50:46Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
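The AirComp idea summarized above can be sketched in a few lines. This is a simplified illustration, not the paper's scheme: `aircomp_aggregate` and the channel-inversion pre-scaling are assumptions, and real systems must handle deep fades where inversion is infeasible:

```python
import numpy as np

rng = np.random.default_rng(1)

def aircomp_aggregate(local_updates, channel_gains, noise_std=0.05):
    """Toy over-the-air aggregation: devices transmit analog-modulated updates
    simultaneously, and the server receives their superposition plus noise.
    Each device pre-scales by 1/h_k (channel inversion, assumed feasible here)
    so the superposed signal approximates the unweighted sum of updates.
    """
    received = np.zeros_like(local_updates[0])
    for u, h in zip(local_updates, channel_gains):
        received += h * (u / h)  # channel multiplies the inverted transmission
    received += rng.normal(0.0, noise_std, received.shape)  # receiver noise
    return received / len(local_updates)  # noisy estimate of the average
```

With `noise_std=0` this recovers the exact average; with noise, the aggregation error the summary mentions appears directly in the returned estimate.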
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
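A minimal sketch of the quantization step that bitwidth FL schemes build on (the function `quantize` is hypothetical and not the paper's quantizer; it assumes a uniform grid over the parameter range with unbiased stochastic rounding):

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, bits):
    """Uniform stochastic quantizer: maps x onto a (2**bits)-level grid over
    [x.min(), x.max()]. Rounding up vs. down is randomized in proportion to
    the fractional position, which makes the quantizer unbiased in expectation."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    idx = (x - lo) / scale            # fractional grid coordinate in [0, levels]
    low = np.floor(idx)
    prob_up = idx - low               # probability of rounding up
    q = low + (rng.random(x.shape) < prob_up)
    return lo + q * scale
```

Each element lands within one grid step of its original value, so the bitwidth directly trades communication cost (bits per parameter) against quantization error, which is the trade-off the RL controller in the summarized paper optimizes over iterations.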
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated
Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
- Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Federated Learning over Wireless Device-to-Device Networks: Algorithms
and Convergence Analysis [46.76179091774633]
This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
arXiv Detail & Related papers (2021-01-29T17:42:26Z)
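The DSGD algorithms referenced in the last entry share a common skeleton: mix neighbors' parameters through a doubly stochastic matrix, then take a local gradient step. A minimal sketch (the name `dsgd_step` and the ring topology are illustrative assumptions, not the cited paper's exact scheme):

```python
import numpy as np

def dsgd_step(params, grads, mixing, lr=0.1):
    """One decentralized SGD step.

    params, grads: (n_devices, dim) arrays of local parameters and gradients.
    mixing:        (n, n) doubly stochastic matrix; row i holds the weights
                   device i assigns to itself and its neighbors.
    Each device first averages over its neighborhood (consensus), then
    descends along its own local gradient.
    """
    return mixing @ params - lr * grads
```

With zero gradients this reduces to pure gossip averaging: repeated mixing drives every device to the average of the initial parameters, which is the consensus property the convergence bounds for both digital and analog implementations rest on.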
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.