Relay-Assisted Cooperative Federated Learning
- URL: http://arxiv.org/abs/2107.09518v1
- Date: Tue, 20 Jul 2021 14:06:19 GMT
- Title: Relay-Assisted Cooperative Federated Learning
- Authors: Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
- Abstract summary: Over-the-air computation allows mobile devices to concurrently upload their local models.
Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices.
In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue.
- Score: 10.05493937334448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has recently emerged as a promising technology to
enable artificial intelligence (AI) at the network edge, where distributed
mobile devices collaboratively train a shared AI model under the coordination
of an edge server. To significantly improve the communication efficiency of FL,
over-the-air computation allows a large number of mobile devices to
concurrently upload their local models by exploiting the superposition property
of wireless multi-access channels. Due to wireless channel fading, the model
aggregation error at the edge server is dominated by the weakest channel among
all devices, causing severe straggler issues. In this paper, we propose a
relay-assisted cooperative FL scheme to effectively address the straggler
issue. In particular, we deploy multiple half-duplex relays to cooperatively
assist the devices in uploading the local model updates to the edge server. The
nature of the over-the-air computation poses system objectives and constraints
that are distinct from those in traditional relay communication systems.
Moreover, the strong coupling between the design variables renders the
optimization of such a system challenging. To tackle the issue, we propose an
alternating-optimization-based algorithm to optimize the transceiver and relay
operation with low complexity. Then, we analyze the model aggregation error in
a single-relay case and show that our relay-assisted scheme achieves a smaller
error than the one without relays provided that the relay transmit power and
the relay channel gains are sufficiently large. The analysis provides critical
insights on relay deployment in the implementation of cooperative FL. Extensive
numerical results show that our design achieves faster convergence compared
with state-of-the-art schemes.
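The straggler effect described above can be sketched numerically. The following toy simulation (an illustration only, not the paper's exact scheme; the channel-inversion power control, noise model, and the way the relay is modeled as simply boosting the straggler's effective gain are all simplifying assumptions) shows how the common receive scaling in over-the-air aggregation is capped by the weakest channel, and how strengthening that one link shrinks the aggregation error:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 1000            # number of devices, model dimension
P, noise_var = 1.0, 0.1    # per-device power budget, receiver noise variance

def aircomp_mse(gains, trials=200):
    """Empirical MSE of over-the-air model averaging with channel-inversion
    power control: all devices must align at the server, so the common
    receive scaling eta is capped by the weakest channel gain."""
    eta = P * np.min(gains ** 2)   # limited by the straggler's channel
    errs = []
    for _ in range(trials):
        w = rng.standard_normal((K, d))       # local model updates
        target = w.mean(axis=0)
        # device k sends (sqrt(eta)/h_k) * w_k, which meets its power budget;
        # the channel multiplies by h_k, so the superposed signal is aligned
        y = np.sqrt(eta) * w.sum(axis=0) + rng.normal(0, np.sqrt(noise_var), d)
        est = y / (K * np.sqrt(eta))
        errs.append(np.mean((est - target) ** 2))
    return float(np.mean(errs))

gains = rng.uniform(0.5, 1.5, K)
gains[0] = 0.05                    # one straggler in a deep fade
mse_no_relay = aircomp_mse(gains)

boosted = gains.copy()
boosted[0] = 0.5                   # relay effectively strengthens the straggler's link
mse_relay = aircomp_mse(boosted)

print(mse_no_relay, mse_relay)
```

With these numbers the error without the relay is roughly `noise_var / (K**2 * P * min(gains)**2)`, so the single deep-fade device inflates the MSE by orders of magnitude, consistent with the abstract's claim that the aggregation error is dominated by the weakest channel.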
Related papers
- Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks [55.467288506826755]
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks.
Most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance.
We propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced scheme.
arXiv Detail & Related papers (2025-01-20T04:26:21Z)
- Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks [7.439160287320074]
Federated split learning (FedSL) implements collaborative training across the edge devices and the server through model splitting.
We propose a lightweight FedSL scheme that further alleviates the training burden on resource-constrained edge devices.
We conduct theoretical analysis to quantify the convergence performance of the proposed scheme.
arXiv Detail & Related papers (2024-12-09T11:43:03Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
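The three transmittable update types mentioned above propagate the over-the-air aggregation noise differently. The snippet below is a hypothetical single-round illustration (the one-step update rule and additive-noise model are assumptions, not this paper's analysis): noise on a transmitted model lands on the global model directly, while noise on a transmitted gradient is scaled by the learning rate.

```python
import numpy as np

rng = np.random.default_rng(1)
d, lr = 100, 0.1
noise = rng.normal(0, 0.05, d)          # additive AirComp aggregation noise

w_global = rng.standard_normal(d)
grads = rng.standard_normal((5, d))     # toy per-device gradients
avg_grad = grads.mean(axis=0)
w_ideal = w_global - lr * avg_grad      # noiseless FedAvg update

# Devices transmit updated local models: noise perturbs the model itself.
w_from_models = (w_global - lr * avg_grad) + noise

# Devices transmit gradients: noise is attenuated by the learning rate.
w_from_grads = w_global - lr * (avg_grad + noise)

err_models = np.linalg.norm(w_from_models - w_ideal)   # = ||noise||
err_grads = np.linalg.norm(w_from_grads - w_ideal)     # = lr * ||noise||
print(err_models, err_grads)
```

Transmitting a model difference behaves like the model case in this toy setup; the relative merits in practice depend on the precoding and power control, which is what the cited convergence analysis quantifies.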
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
- Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach [30.1988598440727]
We develop a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with the state-of-the-art approaches.
arXiv Detail & Related papers (2020-11-20T08:54:13Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.