Over-the-Air Multi-Task Federated Learning Over MIMO Interference
Channel
- URL: http://arxiv.org/abs/2112.13603v1
- Date: Mon, 27 Dec 2021 10:42:04 GMT
- Title: Over-the-Air Multi-Task Federated Learning Over MIMO Interference
Channel
- Authors: Chenxi Zhong, Huiyuan Yang, and Xiaojun Yuan
- Abstract summary: We study over-the-air multi-task FL (OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel.
We propose a novel model aggregation method for the alignment of local gradients for different devices.
We show that due to the use of the new model aggregation method, device selection is no longer essential to our scheme.
- Score: 17.362158131772127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the explosive growth of data and wireless devices, federated learning
(FL) has emerged as a promising technology for large-scale intelligent systems.
Utilizing the analog superposition of electromagnetic waves, over-the-air
computation is an appealing approach to reduce the burden of communication in
the FL model aggregation. However, with the urgent demand for intelligent
systems, the training of multiple tasks with over-the-air computation further
aggravates the scarcity of communication resources. This issue can be
alleviated to some extent by training multiple tasks simultaneously with shared
communication resources, but the latter inevitably brings about the problem of
inter-task interference. In this paper, we study over-the-air multi-task FL
(OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel.
We propose a novel model aggregation method that aligns the local
gradients of different devices, alleviating the straggler problem that
widely exists in over-the-air computation due to channel heterogeneity. We
establish a unified communication-computation analysis framework for the
proposed OA-MTFL scheme by considering the spatial correlation between devices,
and formulate an optimization problem of designing transceiver beamforming and
device selection. We develop an algorithm by using alternating optimization
(AO) and fractional programming (FP) to solve this problem, which effectively
relieves the impact of inter-task interference on the FL learning performance.
We show that due to the use of the new model aggregation method, device
selection is no longer essential to our scheme, thereby avoiding the heavy
computational burden caused by implementing device selection. The numerical
results demonstrate the correctness of the analysis and the outstanding
performance of the proposed scheme.
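The core idea behind over-the-air model aggregation can be illustrated with a toy sketch. The example below is hypothetical and heavily simplified: it uses scalar channel-inversion precoding over a single-antenna channel, whereas the paper designs MIMO transceiver beamforming; all variable names and values are illustrative only.

```python
import numpy as np

# Toy illustration of over-the-air (analog) aggregation: K devices
# transmit pre-scaled local gradients simultaneously, the channel
# superimposes them, and the server recovers a noisy average.
rng = np.random.default_rng(0)

K, d = 4, 8                          # devices, gradient dimension
grads = rng.normal(size=(K, d))      # local gradients (placeholder data)
h = rng.uniform(0.5, 1.5, size=K)    # real scalar channel gains (MIMO omitted)

# Channel inversion at each transmitter aligns contributions so the
# superimposed signal equals the plain sum of gradients.
tx = grads / h[:, None]              # pre-equalized transmit signals
noise = rng.normal(scale=0.01, size=d)
rx = (h[:, None] * tx).sum(axis=0) + noise  # analog superposition + noise

ota_avg = rx / K                     # receiver scaling -> noisy average
true_avg = grads.mean(axis=0)
print(np.max(np.abs(ota_avg - true_avg)))  # small: only the noise term remains
```

Note that the channel-inversion step is exactly where stragglers arise: a device with a deep fade (small `h`) must either boost its transmit power or limit the common scaling for everyone, which is the problem the paper's aggregation method targets.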
Related papers
- Resource Management for Low-latency Cooperative Fine-tuning of Foundation Models at the Network Edge [35.40849522296486]
Large-scale foundation models (FoMos) can exhibit human-like intelligence.
FoMos need to be adapted to specialized downstream tasks through fine-tuning techniques.
We advocate multi-device cooperation within the device-edge cooperative fine-tuning paradigm.
arXiv Detail & Related papers (2024-07-13T12:47:14Z)
- Random Aggregate Beamforming for Over-the-Air Federated Learning in Large-Scale Networks [66.18765335695414]
We consider a joint device selection and aggregate beamforming design with the objectives of minimizing the aggregate error and maximizing the number of selected devices.
To tackle the problems in a cost-effective manner, we propose a random aggregate beamforming-based scheme.
We additionally analyze the resulting aggregate error and the number of selected devices as the number of devices grows large.
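The random-sampling idea can be sketched as follows. This is a hypothetical example: the variance-of-gains surrogate used as the "aggregate error" here is an assumption for illustration, not the objective defined in that paper.

```python
import numpy as np

# Sketch of random aggregate beamforming: draw candidate receive
# beamformers at random and keep the one minimizing an aggregate-error
# surrogate, avoiding a costly joint optimization.
rng = np.random.default_rng(2)

N, K, J = 4, 6, 200                  # antennas, devices, random candidates
H = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))  # channels

def aggregate_error(m, H):
    """Surrogate error: spread of effective gains |m^H h_k| across devices."""
    gains = np.abs(m.conj() @ H)
    return np.var(gains)

cands = rng.normal(size=(J, N)) + 1j * rng.normal(size=(J, N))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)  # unit-norm beamformers
best = min(cands, key=lambda m: aggregate_error(m, H))
print(aggregate_error(best, H) <= aggregate_error(cands[0], H))  # True
```

The appeal of such a scheme is that the per-candidate cost is a single matrix-vector product, so the search scales gracefully as the network grows.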
arXiv Detail & Related papers (2024-02-20T23:59:45Z)
- Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning [9.735236606901038]
Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications.
In this paper, we propose a relay-assisted FL framework, and investigate the device scheduling problem in relay-assisted FL systems.
arXiv Detail & Related papers (2023-12-15T03:04:39Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Multi-Resource Allocation for On-Device Distributed Federated Learning Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach [30.1988598440727]
We develop a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with the state-of-the-art approaches.
arXiv Detail & Related papers (2020-11-20T08:54:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.