Blind Asynchronous Over-the-Air Federated Edge Learning
- URL: http://arxiv.org/abs/2210.17469v1
- Date: Mon, 31 Oct 2022 16:54:14 GMT
- Title: Blind Asynchronous Over-the-Air Federated Edge Learning
- Authors: Saeed Razavikia, Jaume Anguera Peris, Jose Mairton B. da Silva Jr, and
Carlo Fischione
- Abstract summary: Federated Edge Learning (FEEL) is a distributed machine learning technique.
We propose a novel synchronization-free method to recover the parameters of the global model over the air.
Our proposed algorithm comes within $10\%$ of the ideal synchronized scenario and performs $4\times$ better than the simple case where no recovery method is used.
- Score: 15.105440618101147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Edge Learning (FEEL) is a distributed machine learning technique
where each device contributes to training a global inference model by
independently performing local computations with its data. More recently,
FEEL has been merged with over-the-air computation (OAC), where the global
model is calculated over the air by leveraging the superposition of analog
signals. However, when implementing FEEL with OAC, there is the challenge of
how to precode the analog signals to overcome any time misalignment at the
receiver. In this work, we propose a novel synchronization-free method to
recover the parameters of the global model over the air without requiring any
prior information about the time misalignments. For that, we formulate a
norm-minimization problem and recover the global model directly by solving a
convex semi-definite program. The performance of the
proposed method is evaluated in terms of accuracy and convergence via numerical
experiments. We show that our proposed algorithm comes within $10\%$ of the
ideal synchronized scenario and performs $4\times$ better than the simple case
where no recovery method is used.
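The central computational step described in the abstract is recovering the global model by solving a convex semi-definite program obtained from a norm-minimization formulation. The sketch below is only a generic illustration of that idea, not the paper's actual formulation: it assumes the CVXPY and NumPy packages and swaps in nuclear-norm minimization of a lifted matrix, a standard convex surrogate that CVXPY solves as a semi-definite program.

```python
# Illustrative sketch only (not the paper's formulation): recover a low-rank
# "lifted" matrix from linear measurements by nuclear-norm minimization, which
# CVXPY reduces to a semi-definite program and hands to the SCS solver.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 15, 2, 200                                   # size, rank, number of measurements

X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # low-rank ground truth
A = [rng.standard_normal((n, n)) for _ in range(m)]                  # random measurement maps
y = np.array([np.trace(Ai.T @ X_true) for Ai in A])                  # noiseless measurements

X = cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(X))                 # convex norm surrogate for rank
constraints = [cp.trace(A[i].T @ X) == y[i] for i in range(m)]
cp.Problem(objective, constraints).solve(solver=cp.SCS)

err = np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.3e}")
```

The point of the sketch is the workflow (pose a norm-minimization problem, let a conic solver handle the underlying SDP), not the specific norm, which in the paper is tailored to the time-misaligned over-the-air model.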
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve communication efficiency and extend the convergence analysis to the different forms of model aggregation error caused by these schemes. (A minimal sketch of AirComp-style aggregation appears after this list.)
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Semi-Asynchronous Federated Edge Learning Mechanism via Over-the-air
Computation [4.598679151181452]
We propose a semi-asynchronous aggregation FEEL mechanism with AirComp scheme (PAOTA) to improve the training efficiency of the FEEL system.
Our proposed algorithm achieves convergence performance close to that of the ideal Local SGD.
arXiv Detail & Related papers (2023-05-06T15:06:03Z) - Adaptive Sparse Gaussian Process [0.0]
We propose the first adaptive sparse Gaussian Process (GP) able to address all these issues.
We first reformulate a variational sparse GP algorithm to make it adaptive through a forgetting factor.
We then propose updating a single inducing point of the sparse GP model together with the remaining model parameters every time a new sample arrives.
arXiv Detail & Related papers (2023-02-20T21:34:36Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Resource-Efficient and Delay-Aware Federated Learning Design under Edge
Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z) - From Deterioration to Acceleration: A Calibration Approach to
Rehabilitating Step Asynchronism in Federated Optimization [13.755421424240048]
We propose a new algorithm, FedaGrac, which calibrates the local direction to a predictive global orientation.
We theoretically prove that FedaGrac holds an improved order of convergence rate over the state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-17T07:26:31Z) - Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with
Delayed Gradients [21.63719641718363]
Federated learning (FL) is a new machine learning framework that trains a joint model across a large number of decentralized computing devices.
This paper presents a novel FL algorithm, namely Hybrid Federated Learning (HFL), to achieve a balance between learning efficiency and effectiveness.
arXiv Detail & Related papers (2021-02-12T02:27:44Z) - Edge Federated Learning Via Unit-Modulus Over-The-Air Computation
(Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
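Several of the entries above, like the main paper, rely on over-the-air computation, where the multiple-access channel itself performs the aggregation. The following minimal sketch uses an assumed toy setup (NumPy only, no fading or power control) rather than any of the papers' actual signal models; it shows how pre-scaled local updates superimposed by the channel approximate the ideal average.

```python
# Minimal illustrative sketch of AirComp-style aggregation (assumed toy setup):
# each device sends its model update as an analog signal, the wireless channel
# adds the signals, and the server reads the noisy sum as the aggregated update.
import numpy as np

rng = np.random.default_rng(1)
num_devices, dim, noise_std = 8, 1000, 0.01

local_updates = [rng.standard_normal(dim) for _ in range(num_devices)]

# Each device pre-scales its update by 1/K so that the channel's natural
# summation yields the average; channel inversion is omitted for brevity.
transmitted = [u / num_devices for u in local_updates]

# The multiple-access channel superimposes all transmissions and adds noise.
received = np.sum(transmitted, axis=0) + noise_std * rng.standard_normal(dim)

ideal_average = np.mean(local_updates, axis=0)
print("aggregation error:", np.linalg.norm(received - ideal_average))
```

With perfect synchronization the only error is the additive noise term; the main paper addresses the harder case where the devices' transmissions are also misaligned in time.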