Convergence Analysis of Over-the-Air FL with Compression and Power
Control via Clipping
- URL: http://arxiv.org/abs/2305.11135v1
- Date: Thu, 18 May 2023 17:30:27 GMT
- Title: Convergence Analysis of Over-the-Air FL with Compression and Power
Control via Clipping
- Authors: Haifeng Wen, Hong Xing, and Osvaldo Simeone
- Abstract summary: We make two contributions to the development of AirFL based on norm clipping.
First, we provide a convergence bound for AirFL-Clip that applies to general smooth learning objectives.
Second, we extend AirFL-Clip to include Top-k sparsification and linear compression, yielding AirFL-Clip-Comp.
- Score: 30.958677272798617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the key challenges towards the deployment of over-the-air federated
learning (AirFL) is the design of mechanisms that can comply with the power and
bandwidth constraints of the shared channel, while causing minimum
deterioration to the learning performance as compared to baseline noiseless
implementations. For additive white Gaussian noise (AWGN) channels with
instantaneous per-device power constraints, prior work has demonstrated the
optimality of a power control mechanism based on norm clipping. This was done
through the minimization of an upper bound on the optimality gap for smooth
learning objectives satisfying the Polyak-Łojasiewicz (PL) condition. In
this paper, we make two contributions to the development of AirFL based on norm
clipping, which we refer to as AirFL-Clip. First, we provide a convergence
bound for AirFL-Clip that applies to general smooth and non-convex learning
objectives. Unlike existing results, the derived bound is free from
run-specific parameters, thus supporting an offline evaluation. Second, we
extend AirFL-Clip to include Top-k sparsification and linear compression. For
this generalized protocol, referred to as AirFL-Clip-Comp, we derive a
convergence bound for general smooth and non-convex learning objectives. We
argue, and demonstrate via experiments, that the only time-varying quantities
present in the bound can be efficiently estimated offline by leveraging the
well-studied properties of sparse recovery algorithms.
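To make the mechanisms in the abstract concrete, below is a minimal, hedged sketch of norm-clipping power control combined with Top-k sparsification and over-the-air aggregation on an AWGN channel. The function names, the clipping threshold gamma, the noise level, and the plain averaging step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def clip_to_power_budget(update: np.ndarray, gamma: float) -> np.ndarray:
    """Scale the update so its l2-norm never exceeds gamma, enforcing an
    instantaneous per-device power constraint (norm clipping)."""
    norm = np.linalg.norm(update)
    return update if norm <= gamma else update * (gamma / norm)

def top_k(update: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries (Top-k sparsification)."""
    out = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    out[idx] = update[idx]
    return out

def airfl_clip_comp_round(updates, gamma, k, noise_std, rng):
    """One over-the-air round: each device sparsifies and clips its update;
    the AWGN channel superposes the transmissions and adds noise."""
    tx = [clip_to_power_budget(top_k(u, k), gamma) for u in updates]
    superposed = np.sum(tx, axis=0)                     # over-the-air sum
    noise = rng.normal(0.0, noise_std, superposed.shape)
    return (superposed + noise) / len(updates)          # noisy average

rng = np.random.default_rng(0)
updates = [rng.normal(size=1000) for _ in range(10)]
avg = airfl_clip_comp_round(updates, gamma=5.0, k=100, noise_std=0.1, rng=rng)
```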
Related papers
- Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is an attractive alternative at the network edge, but faces power and resource constraints in wireless networks.
We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization error.
LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks [20.048146776405005]
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents a new idea, Federated Learning with Adjusted leaRning ratE (FLARE).
Experiments show that FLARE consistently outperforms the baselines.
arXiv Detail & Related papers (2024-04-23T07:48:17Z) - AirFL-Mem: Improving Communication-Learning Trade-Off by Long-Term
Memory [37.43361910009644]
We propose AirFL-Mem, a novel scheme designed to mitigate fading by implementing a long-term memory mechanism.
The theoretical results are also leveraged to propose a novel convex optimization strategy for the truncation threshold used for power control in the presence of fading channels.
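As a reading aid, here is a hedged sketch of the kind of long-term memory (error-feedback) mechanism the entry describes: an update blocked by a deep fade is stored and folded into the next transmission. The truncation rule, threshold, and class name are assumptions for illustration, not AirFL-Mem's exact design.

```python
import numpy as np

class MemoryDevice:
    """Device that remembers untransmitted residuals across rounds."""

    def __init__(self, dim: int):
        self.memory = np.zeros(dim)  # accumulated untransmitted update mass

    def transmit(self, update: np.ndarray, channel_gain: float,
                 threshold: float) -> np.ndarray:
        candidate = update + self.memory       # fold in the past residual
        if channel_gain >= threshold:          # channel strong enough
            self.memory = np.zeros_like(candidate)
            return candidate                   # transmit everything
        self.memory = candidate                # deep fade: store for later
        return np.zeros_like(candidate)        # nothing reaches the server
```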
arXiv Detail & Related papers (2023-10-25T12:51:38Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
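The aggregation error mentioned in this entry can be illustrated numerically; the toy model below (plain AWGN, equal channel gains, simple averaging) is an assumption for illustration rather than the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)
local_updates = [rng.normal(size=500) for _ in range(8)]

ideal_avg = np.mean(local_updates, axis=0)      # noiseless FedAvg target
received = np.sum(local_updates, axis=0) + rng.normal(0.0, 0.05, size=500)
airfedavg_estimate = received / len(local_updates)

# The gap between the noisy over-the-air average and the ideal average
# is exactly the aggregation error the convergence analysis must absorb.
print("aggregation error (l2):",
      np.linalg.norm(airfedavg_estimate - ideal_avg))
```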
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air
Federated Learning [31.966999085992505]
Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
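One plausible reading of channel- and gradient-importance-aware scheduling is sampling devices with probability proportional to the product of channel quality and gradient norm; the sampling rule and the unbiasedness reweighting below are illustrative assumptions, not PO-FL's exact formulation.

```python
import numpy as np

def schedule(channel_gains, grad_norms, budget, rng):
    """Sample `budget` devices with probability proportional to the
    product of channel quality and gradient importance."""
    scores = np.asarray(channel_gains) * np.asarray(grad_norms)
    probs = scores / scores.sum()
    chosen = rng.choice(len(scores), size=budget, replace=False, p=probs)
    return chosen, probs

rng = np.random.default_rng(2)
gains = rng.rayleigh(size=20)            # toy fading magnitudes
norms = rng.uniform(0.5, 2.0, size=20)   # toy gradient norms
chosen, probs = schedule(gains, norms, budget=5, rng=rng)
# Reweighting each selected update by 1/(N * p_i) keeps the aggregate
# an unbiased estimate of the full average under probabilistic scheduling.
weights = 1.0 / (len(gains) * probs[chosen])
```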
arXiv Detail & Related papers (2023-05-26T12:04:59Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
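A toy sketch of the magnitude-based pruning that such communication savings suggest; the global pruning ratio and masking rule are assumptions for illustration, not the paper's exact pruning design.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out roughly the smallest-magnitude fraction `ratio` of the
    weights, shrinking what each device must upload."""
    k = int(ratio * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.default_rng(3).normal(size=(64, 64))
w_pruned = prune_by_magnitude(w, ratio=0.5)  # roughly halves the nonzeros,
# in the spirit of the ~50 percent communication saving reported above
```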
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Over-the-Air Federated Learning with Joint Adaptive Computation and
Power Control [30.7130850260223]
Over-the-air federated learning (OTA-FL) is considered in this paper.
OTA-FL exploits the superposition property of the wireless medium and performs aggregation over the air for free.
arXiv Detail & Related papers (2022-05-12T03:28:03Z) - Unit-Modulus Wireless Federated Learning Via Penalty Alternating
Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a wireless FL framework, which uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z) - Edge Federated Learning Via Unit-Modulus Over-The-Air Computation
(Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z) - Gradient Statistics Aware Power Control for Over-the-Air Federated
Learning [59.40860710441232]
Federated learning (FL) is a promising technique that enables many edge devices to train a machine learning model collaboratively in wireless networks.
This paper studies the power control problem for over-the-air FL by taking gradient statistics into account.
arXiv Detail & Related papers (2020-03-04T14:06:51Z)