Asynchronous Online Federated Learning with Reduced Communication
Requirements
- URL: http://arxiv.org/abs/2303.15226v2
- Date: Tue, 11 Apr 2023 12:33:42 GMT
- Title: Asynchronous Online Federated Learning with Reduced Communication
Requirements
- Authors: Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner,
Yih-Fang Huang, Anthony Kuh
- Abstract summary: We propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy.
By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient.
We conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets.
- Score: 6.282767337715445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online federated learning (FL) enables geographically distributed devices to
learn a global shared model from locally available streaming data. Most online
FL literature considers a best-case scenario regarding the participating
clients and the communication channels. However, these assumptions are often
not met in real-world applications. Asynchronous settings can reflect a more
realistic environment, such as heterogeneous client participation due to
available computational power and battery constraints, as well as delays caused
by communication channels or straggler devices. Further, in most applications,
energy efficiency must be taken into consideration. Using the principles of
partial-sharing-based communications, we propose a communication-efficient
asynchronous online federated learning (PAO-Fed) strategy. By reducing the
communication overhead of the participants, the proposed method renders
participation in the learning task more accessible and efficient. In addition,
the proposed aggregation mechanism accounts for random participation, handles
delayed updates, and mitigates their effect on accuracy. We prove the first- and
second-order convergence of the proposed PAO-Fed method and obtain an
expression for its steady-state mean square deviation. Finally, we conduct
comprehensive simulations to study the performance of the proposed method on
both synthetic and real-life datasets. The simulations reveal that in
asynchronous settings, the proposed PAO-Fed achieves the same convergence
properties as online federated stochastic gradient descent while reducing
the communication overhead by 98 percent.
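The partial-sharing principle underlying PAO-Fed can be illustrated with a minimal sketch: each client transmits only a small fraction of its model entries per round, and the server averages whatever it receives per coordinate. The random coordinate selection and the `share_frac` parameter below are illustrative assumptions, not the paper's exact coordinated selection scheme.

```python
import numpy as np

def partial_share_round(local_models, global_model, share_frac=0.02, rng=None):
    """One aggregation round with partial-sharing communication.

    Each client transmits only a random fraction of its model entries
    (an illustrative stand-in for PAO-Fed's selection mechanism); the
    server averages the received values coordinate by coordinate and
    leaves unreported coordinates unchanged.
    """
    rng = rng or np.random.default_rng(0)
    dim = global_model.size
    k = max(1, int(share_frac * dim))  # entries each client sends
    sums = np.zeros(dim)
    counts = np.zeros(dim)
    for w in local_models:
        idx = rng.choice(dim, size=k, replace=False)  # shared coordinates
        sums[idx] += w[idx]
        counts[idx] += 1
    new_global = global_model.copy()
    received = counts > 0
    new_global[received] = sums[received] / counts[received]
    return new_global
```

With `share_frac=0.02`, each client uploads roughly 2 percent of its parameters per round, which is the mechanism by which communication overhead drops by orders of magnitude relative to full-model exchange.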
Related papers
- REFOL: Resource-Efficient Federated Online Learning for Traffic Flow Forecasting [22.118392944492964]
Multiple federated learning (FL) methods have been proposed for traffic flow forecasting (TFF) to avoid heavy transmission overhead and privacy-leakage concerns.
Online learning can detect concept drift during model training, making it more applicable to TFF.
We propose a novel method named Resource-Efficient Federated Online Learning (REFOL) for TFF, which guarantees prediction performance in a communication-lightweight and computation-efficient way.
arXiv Detail & Related papers (2024-11-21T11:50:17Z) - DynamicFL: Federated Learning with Dynamic Communication Resource Allocation [34.97472382870816]
Federated Learning (FL) is a collaborative machine learning framework that allows multiple users to train models utilizing their local data in a distributed manner.
We introduce DynamicFL, a new FL framework that investigates the trade-offs between global model performance and communication costs.
We show that DynamicFL surpasses current state-of-the-art methods with up to a 10% increase in model accuracy.
arXiv Detail & Related papers (2024-09-08T05:53:32Z) - SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Vertical Federated Learning over Cloud-RAN: Convergence Analysis and
System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
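One plausible form of age-aware weighting, sketched below under the assumption of an exponential staleness discount (the paper's exact weighting may differ), down-weights updates that arrive with larger delay:

```python
import numpy as np

def age_aware_aggregate(global_model, updates, ages, decay=0.5):
    """Combine delayed client updates, weighting each by decay**age so
    that stale updates contribute less. `decay` and the exponential
    form are illustrative assumptions, not the paper's exact design."""
    weights = np.array([decay ** a for a in ages], dtype=float)
    weights /= weights.sum()  # normalize so weights sum to one
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return global_model + aggregated
```

For example, an update that is two rounds stale with `decay=0.5` receives a quarter of the weight of a fresh update before normalization, limiting how much outdated gradients can pull the global model backward.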
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Communication-Efficient Consensus Mechanism for Federated Reinforcement
Learning [20.891460617583302]
We show that FL can improve the policy performance of IRL in terms of training efficiency and stability.
To reach a good balance between improving the model's convergence performance and reducing the required communication and computation overheads, this paper proposes a system utility function.
arXiv Detail & Related papers (2022-01-30T04:04:24Z) - Resource-Aware Asynchronous Online Federated Learning for Nonlinear
Regression [5.194557636096977]
We consider asynchronous online federated learning (ASO-Fed) for nonlinear regression.
We use the principles of partial-sharing-based communication to reduce the communication overhead associated with ASO-Fed.
In the asynchronous setting, it is possible to achieve the same convergence as online federated stochastic gradient descent (Online-FedSGD).
arXiv Detail & Related papers (2021-11-27T16:41:30Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Communication-Efficient Hierarchical Federated Learning for IoT
Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.