How Does Cell-Free Massive MIMO Support Multiple Federated Learning
Groups?
- URL: http://arxiv.org/abs/2107.09577v1
- Date: Tue, 20 Jul 2021 15:46:53 GMT
- Title: How Does Cell-Free Massive MIMO Support Multiple Federated Learning
Groups?
- Authors: Tung T. Vu, Hien Quoc Ngo, Thomas L. Marzetta, Michail Matthaiou
- Abstract summary: We propose a cell-free massive multiple-input multiple-output (MIMO) network to guarantee the stable operation of multiple FL processes.
We then develop a novel scheme that asynchronously executes the iterations of FL processes under multicasting downlink and conventional uplink transmission protocols.
- Score: 42.63398054091038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been considered as a promising learning framework
for future machine learning systems due to its privacy preservation and
communication efficiency. In beyond-5G/6G systems, there are likely to be
multiple FL groups with different learning purposes. This scenario leads to a
question: How does a wireless network support multiple FL groups? As an answer,
we first propose to use a cell-free massive multiple-input multiple-output
(MIMO) network to guarantee the stable operation of multiple FL processes by
letting the iterations of these FL processes be executed together within a
large-scale coherence time. We then develop a novel scheme that asynchronously
executes the iterations of FL processes under multicasting downlink and
conventional uplink transmission protocols. Finally, we propose a
simple/low-complexity resource allocation algorithm which optimally chooses the
power and computation resources to minimize the execution time of each
iteration of each FL process.
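As a rough illustration of the quantity being minimized, the sketch below models one FL iteration's execution time as the slowest user's download + local-computation + upload latency. The field names, rate model, and numbers are illustrative assumptions, not values or expressions from the paper:

```python
# Toy model of one FL iteration's execution time.
# All parameter names and the additive latency model are
# illustrative assumptions, not taken from the paper.

def iteration_time(users):
    """Return the iteration time in seconds: the slowest user's
    download + local compute + upload latency."""
    times = []
    for u in users:
        t_dl = u["model_bits"] / u["dl_rate_bps"]          # receive global model
        t_cpu = (u["samples"] * u["cycles_per_sample"]
                 / u["cpu_hz"])                             # local training
        t_ul = u["model_bits"] / u["ul_rate_bps"]           # send local update
        times.append(t_dl + t_cpu + t_ul)
    return max(times)  # the straggler determines the iteration time
```

Any allocation of power (which shifts the rates) and computation frequency that shrinks the slowest user's sum above shortens the iteration.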
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks [36.6868658064971]
Federated learning (FL) is a distributed learning paradigm wherein users exchange FL models with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising architecture for implementing FL because it serves many users on the same time/frequency resources.
We propose an uplink power allocation scheme in FL over CFmMIMO by considering the effect of each user's power on the energy and latency of other users.
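The coupling that this power allocation accounts for can be illustrated with a toy SINR model in which each user's transmit power appears as interference in every other user's rate. The Shannon-rate formula, `gains`, and `noise` values below are simplifying assumptions; the paper's CFmMIMO rate expressions are more involved:

```python
import math

def uplink_latency(powers, gains, model_bits=1e6, bw=1e6, noise=1e-13):
    """Per-user upload latency (s) under a toy SINR model where all
    other users' transmit powers act as interference. Illustrative
    assumption, not the paper's CFmMIMO rate expression."""
    lat = []
    for k, (p, g) in enumerate(zip(powers, gains)):
        interf = sum(pj * gj
                     for j, (pj, gj) in enumerate(zip(powers, gains))
                     if j != k)
        rate = bw * math.log2(1.0 + p * g / (interf + noise))
        lat.append(model_bits / rate)
    return lat
```

Raising one user's power lowers that user's latency but raises everyone else's, which is why the allocation must be solved jointly.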
arXiv Detail & Related papers (2024-04-28T19:24:58Z)
- Synergies Between Federated Learning and O-RAN: Towards an Elastic Architecture for Multiple Distributed Machine Learning Services [7.057114677579558]
Federated learning (FL) over 5G-and-beyond wireless networks is a popular distributed machine learning (ML) technique.
The implementation of FL over 5G-and-beyond wireless networks, however, faces key challenges caused by (i) the dynamics of wireless network conditions and (ii) the coexistence of multiple FL services in the system.
We first take a closer look into these challenges and unveil nuanced phenomena called over-/under-provisioning of resources and perspective-driven load balancing.
We then take the first steps towards addressing these phenomena by proposing a novel distributed ML architecture called elastic FL (EFL).
arXiv Detail & Related papers (2023-04-14T19:21:42Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
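A minimal sketch of the quantization step in such a scheme, assuming a generic unbiased stochastic quantizer on a uniform grid (the paper's exact quantizer design may differ):

```python
import random

def quantize(vec, bits, lo=-1.0, hi=1.0):
    """Stochastically round each coordinate to one of 2**bits uniform
    levels on [lo, hi]; unbiased in expectation. A generic quantizer
    used here only to illustrate bitwidth-limited FL uploads."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    out = []
    for x in vec:
        x = min(max(x, lo), hi)       # clip to the representable range
        idx = (x - lo) / step
        low = int(idx)
        frac = idx - low
        low += 1 if random.random() < frac else 0  # round up w.p. frac
        out.append(lo + low * step)
    return out
```

A larger `bits` shrinks quantization error but inflates the upload payload, which is exactly the trade-off the RL agent navigates per iteration.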
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Confederated Learning: Federated Learning with Decentralized Edge Servers [42.766372620288585]
Federated learning (FL) is an emerging machine learning paradigm that allows model training to be accomplished without aggregating data at a central server.
We propose a ConFederated Learning (CFL) framework, in which each server is connected with an individual set of devices.
The proposed algorithm employs a random scheduling policy which randomly selects a subset of devices to access their respective servers at each iteration.
arXiv Detail & Related papers (2022-05-30T07:56:58Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Papaya: Practical, Private, and Scalable Federated Learning [6.833772874570774]
Cross-device Federated Learning (FL) is a distributed learning paradigm with several challenges.
Most FL systems described in the literature are synchronous - they perform a synchronized aggregation of model updates from individual clients.
In this work, we outline a production asynchronous FL system design.
arXiv Detail & Related papers (2021-11-08T23:46:42Z)
- Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
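A toy sketch of periodic aggregation in asynchronous FL, assuming scalar model updates with fixed arrival times; updates that miss a round's deadline simply fold into a later round instead of stalling everyone (names and the plain-averaging rule are illustrative, not the paper's design):

```python
def run_async_fl(arrival_times, updates, period=1.0, rounds=3, w0=0.0):
    """Toy asynchronous FL: every `period` seconds the server folds in
    whichever client updates have arrived since the last aggregation.
    Stragglers are absorbed by a later round, never waited for."""
    w = w0
    pending = sorted(zip(arrival_times, updates))
    history = []
    for r in range(1, rounds + 1):
        deadline = r * period
        arrived = [u for (t, u) in pending if t <= deadline]
        pending = [(t, u) for (t, u) in pending if t > deadline]
        if arrived:
            w += sum(arrived) / len(arrived)  # average the received updates
        history.append(w)
    return history
```

Because no round blocks on the slowest client, a single straggler delays only its own contribution rather than the whole system.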
arXiv Detail & Related papers (2021-07-23T18:57:08Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.