AirFL-Mem: Improving Communication-Learning Trade-Off by Long-Term
Memory
- URL: http://arxiv.org/abs/2310.16606v2
- Date: Sat, 28 Oct 2023 02:44:22 GMT
- Title: AirFL-Mem: Improving Communication-Learning Trade-Off by Long-Term
Memory
- Authors: Haifeng Wen, Hong Xing, Osvaldo Simeone
- Abstract summary: We propose AirFL-Mem, a novel scheme designed to mitigate the impact of deep fading by implementing a long-term memory mechanism.
The theoretical results are also leveraged to propose a novel convex optimization strategy for the truncation threshold used for power control in the presence of fading channels.
- Score: 37.43361910009644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Addressing the communication bottleneck inherent in federated learning (FL),
over-the-air FL (AirFL) has emerged as a promising solution, which is, however,
hampered by deep fading conditions. In this paper, we propose AirFL-Mem, a
novel scheme designed to mitigate the impact of deep fading by implementing a
\emph{long-term} memory mechanism. Convergence bounds are provided that account
for long-term memory, as well as for existing AirFL variants with short-term
memory, for general non-convex objectives. The theory demonstrates that
AirFL-Mem exhibits the same convergence rate as federated averaging (FedAvg)
with ideal communication, while the performance of existing schemes is
generally limited by error floors. The theoretical results are also leveraged
to propose a novel convex optimization strategy for the truncation threshold
used for power control in the presence of Rayleigh fading channels.
Experimental results validate the analysis, confirming the advantages of a
long-term memory mechanism for the mitigation of deep fading.
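
To make the abstract's mechanism concrete, here is a minimal toy sketch of one communication round with a long-term memory: updates truncated by deep fades are banked per device and folded back in once the channel clears. The function name, interface, and update rule are illustrative assumptions; the paper's actual algorithm, convergence analysis, and power control are more involved.

```python
import numpy as np

def airfl_mem_round(model, local_updates, channel_gains, threshold, lr, memory):
    """One illustrative AirFL-Mem-style round (hypothetical interface).

    Devices whose fading gain falls below the truncation threshold skip
    transmission; their update is accumulated in a long-term memory and
    re-sent in a later round, instead of being discarded outright.
    """
    aggregate = np.zeros_like(model)
    n_tx = 0
    for i, (update, gain) in enumerate(zip(local_updates, channel_gains)):
        effective = update + memory[i]      # include the backlog from past deep fades
        if gain >= threshold:               # channel good enough: transmit over the air
            aggregate += effective
            memory[i] = np.zeros_like(model)
            n_tx += 1
        else:                               # deep fade: truncate now, remember for later
            memory[i] = effective
    if n_tx > 0:
        model = model - lr * aggregate / n_tx
    return model, memory
```

The distinguishing feature relative to short-term-memory variants is that `memory[i]` persists across arbitrarily many truncated rounds instead of being reset after one round, which the abstract credits for avoiding the error floors of existing schemes.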
Related papers
- Optimal Transceiver Design in Over-the-Air Federated Distillation [34.09979141255862]
We study the transceiver design in terms of the learning convergence rate and the power constraints.
We propose a novel approach to find the optimal receiver beam vector for over-the-air aggregation.
Results show that the proposed over-the-air approach achieves a significant reduction in communication cost with only a minor compromise in testing accuracy.
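
As context for this entry, a minimal sketch of over-the-air aggregation with a receive beam vector follows: all devices transmit simultaneously, the multiple-access channel sums their signals, and the receiver combines its antenna observations with f. The names, signatures, and real-valued symbols here are illustrative assumptions, not the paper's transceiver design.

```python
import numpy as np

def ota_aggregate(symbols, channels, beam, noise_std, rng):
    """Toy over-the-air aggregation: devices transmit at once, the multiple-
    access channel sums their signals, and a receive beam vector f combines
    the N antenna observations into one aggregate estimate (f^H y)."""
    y = rng.normal(scale=noise_std, size=beam.size) \
        + 1j * rng.normal(scale=noise_std, size=beam.size)
    for s, h in zip(symbols, channels):     # superposition over the air
        y = y + h * s                       # h: length-N channel vector, s: scalar symbol
    return np.vdot(beam, y).real            # np.vdot conjugates its first argument
```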
arXiv Detail & Related papers (2025-07-21T05:37:08Z)
- Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is an attractive alternative at the network edge, but faces additional challenges in wireless networks.
We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization error.
LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z)
- LoLaFL: Low-Latency Federated Learning via Forward-only Propagation [25.99531618965931]
Federated learning (FL) has emerged as a widely adopted paradigm for enabling edge learning with distributed data.
Traditional FL with deep neural networks trained via backpropagation struggles to meet the low-latency learning requirements of sixth-generation (6G) mobile networks.
We adopt the state-of-the-art principle of maximal coding rate reduction to learn linear discriminative features and extend the resultant white-box neural network into FL.
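
For reference, the maximal coding rate reduction (MCR$^2$) objective invoked here is commonly written as follows, with $Z \in \mathbb{R}^{d \times m}$ stacking $m$ features and $\Pi^{j}$ the diagonal class-membership matrices; this is the standard formulation from the MCR$^2$ literature, and LoLaFL's exact variant may differ:

```latex
\Delta R(Z,\Pi,\epsilon)
  = \underbrace{\tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{m\epsilon^{2}}\, Z Z^{\top}\Big)}_{\text{expand all features}}
  \;-\;
    \underbrace{\sum_{j=1}^{k} \tfrac{\operatorname{tr}(\Pi^{j})}{2m}
      \log\det\!\Big(I + \tfrac{d}{\operatorname{tr}(\Pi^{j})\,\epsilon^{2}}\, Z \Pi^{j} Z^{\top}\Big)}_{\text{compress each class}}
```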
arXiv Detail & Related papers (2024-12-19T09:20:27Z)
- Integrated Sensing and Communications for Low-Altitude Economy: A Deep Reinforcement Learning Approach [20.36806314683902]
We study an integrated sensing and communications (ISAC) system for the low-altitude economy (LAE).
The expected communication sum-rate over a given flight period is maximized by jointly optimizing the beamforming at the GBS and UAVs' trajectories.
We propose a novel LAE-oriented ISAC scheme, referred to as Deep LAE-ISAC (DeepLSC), by leveraging the deep reinforcement learning (DRL) technique.
arXiv Detail & Related papers (2024-12-05T11:12:46Z)
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods lags far behind that of standard FL.
We propose a novel learning approach, termed FuseFL, that endows OFL with strong performance at low communication and storage cost.
arXiv Detail & Related papers (2024-10-27T09:07:10Z)
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, a minimal number of late pre-trained layers is used to reduce the peak memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)
- Personalized Federated Learning via ADMM with Moreau Envelope [11.558467367982924]
We propose an alternating direction method of multipliers (ADMM) approach for training personalized federated learning (PFL) models with Moreau envelopes (FLAME).
Our theoretical analysis establishes the global convergence under both unbiased and biased client selection strategies.
Our experiments validate that FLAME, when trained on heterogeneous data, outperforms state-of-the-art methods in terms of model performance.
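
As background, the Moreau-envelope construction underlying this line of PFL work replaces each client's local loss $f_i$ with its envelope, coupling a personalized model $\theta_i$ to the shared model $w$; this is the standard formulation (as in pFedMe), and FLAME's specific ADMM splitting is not shown:

```latex
\min_{w}\; \sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) \;=\; \min_{\theta_i}\;\Big\{\, f_i(\theta_i) + \tfrac{\lambda}{2}\,\lVert \theta_i - w\rVert^{2} \,\Big\},
```

where $\lambda$ trades off personalization against consensus with the shared model.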
arXiv Detail & Related papers (2023-11-12T07:13:37Z)
- Convergence Analysis of Over-the-Air FL with Compression and Power Control via Clipping [30.958677272798617]
We make two contributions to the development of AirFL based on norm clipping.
First, we provide a convergence bound for AirFL-Clip that applies to general smooth learning objectives.
Second, we extend AirFL-Clip-Comp to include Top-k sparsification and linear compression.
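
A minimal sketch of the two-stage operator this entry describes, norm clipping followed by Top-k sparsification, appears below; the function `clip_and_topk` and its interface are hypothetical, and the linear-compression stage of AirFL-Clip-Comp is omitted.

```python
import numpy as np

def clip_and_topk(grad: np.ndarray, clip_norm: float, k: int) -> np.ndarray:
    """Hypothetical compression operator in the spirit of AirFL-Clip-Comp:
    norm-clip the update, then keep only the k largest-magnitude entries."""
    # Norm clipping: rescale so the L2 norm never exceeds clip_norm.
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    # Top-k sparsification: zero out all but the k largest-magnitude entries.
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse
```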
arXiv Detail & Related papers (2023-05-18T17:30:27Z)
- Spectrum Breathing: Protecting Over-the-Air Federated Learning Against Interference [73.63024765499719]
Mobile networks can be compromised by interference from neighboring cells or jammers.
We propose Spectrum Breathing, which cascades gradient pruning and spread spectrum to suppress interference without bandwidth expansion.
We show a performance tradeoff between the gradient-pruning error and the interference-induced error, as regulated by the breathing depth.
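
To illustrate the cascade this entry names, the sketch below prunes the gradient and spreads the survivors with a ±1 pseudo-noise sequence so the number of transmitted symbols stays fixed; reading the breathing depth as the common pruning/spreading factor is an assumption drawn from the summary, not the paper's exact construction.

```python
import numpy as np

def spectrum_breathe(grad, breathing_depth, rng):
    """Toy Spectrum Breathing: keep the top 1/breathing_depth fraction of
    gradient entries, then spread each survivor over breathing_depth chips
    with a +/-1 pseudo-noise (PN) sequence, so bandwidth use is unchanged."""
    k = max(1, grad.size // breathing_depth)        # survivors after pruning
    idx = np.argpartition(np.abs(grad), -k)[-k:]    # top-k magnitude pruning
    survivors = grad[idx]
    pn = rng.choice([-1.0, 1.0], size=(k, breathing_depth))
    # Despreading at the receiver (multiply by pn and sum over chips) averages
    # out narrowband interference that is uncorrelated with the PN sequence.
    return (survivors[:, None] * pn).ravel(), idx
```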
arXiv Detail & Related papers (2023-05-10T07:05:43Z)
- Delay-Aware Hierarchical Federated Learning [7.292078085289465]
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with an outdated global model using a convex control algorithm.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and robustness against communication delays.
arXiv Detail & Related papers (2023-03-22T09:23:29Z)
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a unit-modulus wireless FL framework that uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)