Joint Optimization of Communications and Federated Learning Over the Air
- URL: http://arxiv.org/abs/2104.03490v1
- Date: Thu, 8 Apr 2021 03:38:31 GMT
- Title: Joint Optimization of Communications and Federated Learning Over the Air
- Authors: Xin Fan, Yue Wang, Yan Huo, and Zhi Tian
- Abstract summary: Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
- Score: 32.14738452396869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an attractive paradigm for making use of rich
distributed data while protecting data privacy. Nonetheless, nonideal
communication links and limited transmission resources have become the
bottleneck of the implementation of fast and accurate FL. In this paper, we
study joint optimization of communications and FL based on analog aggregation
transmission in realistic wireless networks. We first derive a closed-form
expression for the expected convergence rate of FL over the air, which
theoretically quantifies the impact of analog aggregation on FL. Based on the
analytical result, we develop a joint optimization model for accurate FL
implementation, which allows a parameter server to select a subset of workers
and determine an appropriate power scaling factor. Since the practical setting
of FL over the air encounters unobservable parameters, we reformulate the joint
optimization of worker selection and power allocation using controlled
approximation. Finally, we efficiently solve the resulting mixed-integer
programming problem via a simple yet optimal finite-set search method by
reducing the search space. Simulation results show that the proposed solutions
developed for realistic wireless analog channels outperform a benchmark method,
and achieve comparable performance of the ideal case where FL is implemented
over noise-free wireless channels.
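The core mechanism in the abstract, analog aggregation with worker selection and power scaling, can be illustrated with a minimal simulation sketch. This is not the paper's exact scheme: the top-k selection rule, the channel-inversion scaling, and all numerical values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_workers, dim = 10, 5
updates = rng.normal(size=(num_workers, dim))   # local model updates, one row per worker
h = np.abs(rng.normal(size=num_workers))        # channel gain magnitudes (assumed model)
noise_std = 0.1                                 # receiver noise level (assumed)

# Hypothetical worker selection: keep the 5 workers with the strongest channels.
sel = np.argsort(h)[-5:]

# Power scaling via channel inversion: worker k pre-scales by alpha / h_k, with a
# common factor alpha limited by the weakest selected channel (a stand-in for a
# transmit-power constraint).
alpha = h[sel].min()
tx = (alpha / h[sel, None]) * updates[sel]

# Analog aggregation: transmitted signals superimpose over the multiple-access
# channel, and the server observes only the noisy sum.
rx = (h[sel, None] * tx).sum(axis=0) + rng.normal(scale=noise_std, size=dim)

# The server rescales the received signal to estimate the average selected update.
est_avg = rx / (alpha * len(sel))
true_avg = updates[sel].mean(axis=0)
err = np.abs(est_avg - true_avg).max()
print(err)  # small, noise-limited aggregation error
```

The residual error comes only from receiver noise; excluding weak-channel workers keeps alpha, and hence the effective SNR after rescaling, from collapsing, which is the tension the paper's joint worker-selection and power-scaling optimization addresses.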
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We characterize the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework based on joint transceiver and fronthaul quantization design, for which algorithms based on successive convex approximation and alternating convex search are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Federated Learning with Flexible Control [30.65854375019346]
Federated learning (FL) enables distributed model training from local data collected by users.
In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem.
We propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.
arXiv Detail & Related papers (2022-12-16T14:21:29Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
arXiv Detail & Related papers (2021-03-30T03:50:31Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
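The bisection search mentioned in the delay-minimization entry above exploits monotonicity: in a simple single-user model, total delay decreases in the CPU frequency, so the smallest frequency meeting a delay budget can be bisected for. The delay model and all constants below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical delay model: local computation (C cycles at frequency f) plus a
# fixed communication time; T(f) = C/f + t_comm is monotonically decreasing in f.
C = 1e9          # CPU cycles needed for local training (assumed)
t_comm = 0.5     # communication time in seconds (assumed)
budget = 1.5     # end-to-end delay budget in seconds (assumed)

def total_delay(f):
    return C / f + t_comm

lo, hi = 1e8, 1e10   # frequency search interval (Hz); lo infeasible, hi feasible
for _ in range(60):  # each iteration halves the interval
    mid = (lo + hi) / 2
    if total_delay(mid) <= budget:
        hi = mid     # feasible: try a lower frequency
    else:
        lo = mid     # infeasible: need a higher frequency

f_star = hi          # smallest feasible frequency found
print(f_star, total_delay(f_star))
```

Here the analytic optimum is f = C / (budget - t_comm) = 1e9 Hz; 60 halvings shrink the interval well below any practical tolerance, which is why bisection yields an effectively optimal solution for such monotone feasibility problems.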
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.