Over-the-Air Federated Multi-Task Learning via Model Sparsification and Turbo Compressed Sensing
- URL: http://arxiv.org/abs/2205.03810v1
- Date: Sun, 8 May 2022 08:03:52 GMT
- Title: Over-the-Air Federated Multi-Task Learning via Model Sparsification and Turbo Compressed Sensing
- Authors: Haoming Ma, Xiaojun Yuan, Zhi Ding, Dian Fan and Jun Fang
- Abstract summary: We propose an over-the-air FMTL framework, where multiple learning tasks deployed on edge devices share a non-orthogonal fading channel under the coordination of an edge server.
In OA-FMTL, the local updates of edge devices are sparsified, compressed, and then sent over the uplink channel in a superimposed fashion.
We analyze the performance of the proposed OA-FMTL framework together with the M-Turbo-CS algorithm.
- Score: 48.19771515107681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To achieve communication-efficient federated multi-task learning (FMTL), we
propose an over-the-air FMTL (OA-FMTL) framework, where multiple learning tasks
deployed on edge devices share a non-orthogonal fading channel under the
coordination of an edge server (ES). In OA-FMTL, the local updates of edge
devices are sparsified, compressed, and then sent over the uplink channel in a
superimposed fashion. The ES employs over-the-air computation in the presence
of intertask interference. More specifically, the model aggregations of all the
tasks are reconstructed from the channel observations concurrently, based on a
modified version of the turbo compressed sensing (Turbo-CS) algorithm (named
M-Turbo-CS). We analyze the performance of the proposed OA-FMTL framework
together with the M-Turbo-CS algorithm. Furthermore, based on the analysis, we
formulate a communication-learning optimization problem to improve the system
performance by adjusting the power allocation among the tasks at the edge
devices. Numerical simulations show that our proposed OA-FMTL framework effectively
suppresses the inter-task interference and achieves a learning performance
comparable to its counterpart with orthogonal multi-task transmission. It is
also shown that the proposed inter-task power allocation optimization algorithm
substantially reduces the overall communication overhead by appropriately
adjusting the power allocation among the tasks.
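The uplink pipeline described in the abstract (top-k sparsification, linear compression, superimposed over-the-air transmission, and compressed-sensing recovery of the aggregate at the ES) can be sketched as below. All dimensions are illustrative, unit fading after pre-equalization is assumed, and plain ISTA is used as a simple stand-in for the paper's M-Turbo-CS reconstruction, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K, D = 256, 128, 8, 4   # model dim, measurements, per-device sparsity, devices

def top_k_sparsify(g, k):
    """Keep the k largest-magnitude entries of the local update, zero the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Compression matrix: M rows of a random orthonormal basis, rescaled.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = Q[:M] * np.sqrt(N / M)

# Each device sparsifies and compresses its local update.
updates = [top_k_sparsify(rng.standard_normal(N), K) for _ in range(D)]
tx = [A @ g for g in updates]

# Over-the-air superposition: the channel adds the transmitted signals
# (unit fading assumed after pre-equalization), plus receiver noise.
y = sum(tx) + 0.01 * rng.standard_normal(M)

def ista(y, A, lam=0.05, steps=200):
    """Recover a sparse vector from y = A x + noise by iterative soft thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / spectral-norm^2 step size
    for _ in range(steps):
        z = x + step * A.T @ (y - A @ x)     # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

# The ES recovers the *aggregated* sparse update directly from the superposition.
agg_true = sum(updates)
agg_hat = ista(y, A)
nmse = np.linalg.norm(agg_hat - agg_true) ** 2 / np.linalg.norm(agg_true) ** 2
print(f"aggregation NMSE: {nmse:.4f}")
```

Note that the server never sees the individual device updates: the superposition itself performs the sum, which is what makes the scheme communication-efficient.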
Related papers
- Resource Management for Low-latency Cooperative Fine-tuning of Foundation Models at the Network Edge [35.40849522296486]
Large-scale foundation models (FoMos) can exhibit human-like intelligence.
FoMos need to be adapted to specialized downstream tasks through fine-tuning techniques.
We advocate multi-device cooperation within the device-edge cooperative fine-tuning paradigm.
arXiv Detail & Related papers (2024-07-13T12:47:14Z)
- Energy-Efficient Power Control for Multiple-Task Split Inference in UAVs: A Tiny Learning-Based Approach [27.48920259431965]
We present a two-timescale approach for energy minimization in split inference in unmanned aerial vehicles (UAVs).
We replace the optimization of transmit power with that of transmission time to decrease the computational complexity of the optimization problem.
Simulation results show that the proposed algorithm can achieve a higher probability of successful task completion with lower energy consumption.
arXiv Detail & Related papers (2023-12-31T10:16:59Z)
- High Efficiency Inference Accelerating Algorithm for NOMA-based Mobile Edge Computing [23.88527790721402]
Splitting the inference model between the device, the edge server, and the cloud can greatly improve the performance of edge intelligence (EI).
Non-orthogonal multiple access (NOMA), one of the key supporting technologies of B5G/6G, can achieve massive connectivity and high spectrum efficiency.
We propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge.
arXiv Detail & Related papers (2023-12-26T02:05:52Z)
- Communication-Efficient Framework for Distributed Image Semantic Wireless Transmission [68.69108124451263]
A federated learning-based semantic communication (FLSC) framework is proposed for multi-task distributed image transmission with IoT devices.
Each link is composed of a hierarchical vision transformer (HVT)-based extractor and a task-adaptive translator.
A channel state information-based multiple-input multiple-output transmission module is designed to combat channel fading and noise.
arXiv Detail & Related papers (2023-08-07T16:32:14Z)
- Hierarchical Over-the-Air FedGradNorm [50.756991828015316]
Multi-task learning (MTL) is a learning paradigm to learn multiple related tasks simultaneously with a single shared network.
We propose hierarchical over-the-air (HOTA) PFL with a dynamic weighting strategy which we call HOTA-FedGradNorm.
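As a rough illustration of such a dynamic weighting strategy, the sketch below rebalances per-task loss weights inversely to their current gradient norms so that no single task dominates the shared network. The inverse-norm rule and all names here are simplified assumptions, not the exact FedGradNorm update.

```python
import numpy as np

def rebalance_weights(grad_norms, alpha=1.0):
    """Return task loss weights inversely proportional to gradient norms.

    Tasks whose gradients are currently small get larger weights;
    alpha controls how aggressively the weights are rebalanced.
    """
    g = np.asarray(grad_norms, dtype=float)
    w = (g.mean() / (g + 1e-12)) ** alpha
    return w * len(g) / w.sum()   # normalize so weights sum to n_tasks

# Task 0 has the smallest gradient norm, so it receives the largest weight.
weights = rebalance_weights([0.5, 1.0, 2.0])
print(weights)
```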
arXiv Detail & Related papers (2022-12-14T18:54:46Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA)
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
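A minimal sketch of the partial-aggregation idea: only a shared subset of parameters is averaged across devices, while the remaining parameters stay device-local. The dict-of-parameters model and the "feat"/"head" split are illustrative assumptions; the paper's actual criterion for choosing the aggregated subset is not reproduced here.

```python
def partial_aggregate(local_models, shared_keys):
    """Average only the shared parameters; keep the rest device-local."""
    n = len(local_models)
    avg = {k: sum(m[k] for m in local_models) / n for k in shared_keys}
    # Each device keeps its personalized parameters, overwritten only
    # on the shared keys with the federated average.
    return [{**m, **avg} for m in local_models]

# Two toy devices: the feature extractor is shared, the head stays local.
models = [{"feat": 1.0, "head": 0.0}, {"feat": 3.0, "head": 10.0}]
merged = partial_aggregate(models, ["feat"])
```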
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Over-the-Air Multi-Task Federated Learning Over MIMO Interference Channel [17.362158131772127]
We study over-the-air multi-task FL (OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel.
We propose a novel model aggregation method for the alignment of local gradients for different devices.
We show that due to the use of the new model aggregation method, device selection is no longer essential to our scheme.
arXiv Detail & Related papers (2021-12-27T10:42:04Z)
- Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.