Fast Federated Learning in the Presence of Arbitrary Device Unavailability
- URL: http://arxiv.org/abs/2106.04159v1
- Date: Tue, 8 Jun 2021 07:46:31 GMT
- Title: Fast Federated Learning in the Presence of Arbitrary Device Unavailability
- Authors: Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang
- Abstract summary: Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
- Score: 26.368873771739715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) coordinates with numerous heterogeneous devices to
collaboratively train a shared model while preserving user privacy. Despite its
multiple advantages, FL faces new challenges. One challenge arises when devices
drop out of the training process beyond the control of the central server. In
this case, the convergence of popular FL algorithms such as FedAvg is severely
influenced by the straggling devices. To tackle this challenge, we study
federated learning algorithms under arbitrary device unavailability and propose
an algorithm named Memory-augmented Impatient Federated Averaging (MIFA). Our
algorithm efficiently avoids excessive latency induced by inactive devices, and
corrects the gradient bias using the memorized latest updates from the devices.
We prove that MIFA achieves minimax optimal convergence rates on non-i.i.d.
data for both strongly convex and non-convex smooth functions. We also provide
an explicit characterization of the improvement over baseline algorithms
through a case study, and validate the results by numerical experiments on
real-world datasets.
Related papers
- FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks [20.048146776405005]
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents a new idea with Federated Learning Adjusted leaRning ratE (FLARE).
Experiments show that FLARE consistently outperforms the baselines.
arXiv Detail & Related papers (2024-04-23T07:48:17Z)
- Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method [14.986031916712108]
Cross-device federated learning (FL) is a growing machine learning framework whereby multiple edge devices collaborate to train a model without disclosing their raw data.
We show how to harness the wireless channel in the learning algorithm itself, instead of analyzing it to remove its impact.
arXiv Detail & Related papers (2024-01-30T21:46:09Z)
- Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning [9.735236606901038]
Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications.
In this paper, we propose a relay-assisted FL framework, and investigate the device scheduling problem in relay-assisted FL systems.
arXiv Detail & Related papers (2023-12-15T03:04:39Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning [28.65635956111857]
Federated learning (FL) enables edge devices to collaboratively train a global model without sharing their private data.
To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to zeroth-order methods.
arXiv Detail & Related papers (2022-01-24T08:56:06Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Multi-task Federated Edge Learning (MtFEEL) in Wireless Networks [1.9250873974729816]
Federated Learning (FL) has evolved as a promising technique to handle distributed machine learning across edge devices.
A novel communication-efficient FL algorithm for personalised learning in a wireless setting with guarantees is presented.
arXiv Detail & Related papers (2021-08-05T10:54:38Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)