On Federated Learning with Energy Harvesting Clients
- URL: http://arxiv.org/abs/2202.06105v1
- Date: Sat, 12 Feb 2022 17:21:09 GMT
- Title: On Federated Learning with Energy Harvesting Clients
- Authors: Cong Shen, Jing Yang, Jie Xu
- Abstract summary: We propose an energy harvesting federated learning (EHFL) framework in this paper.
The introduction of energy harvesting (EH) implies that a client's availability to participate in any FL round cannot be guaranteed, which complicates the theoretical analysis.
Results suggest that having a uniform client scheduling that maximizes the minimum number of clients throughout the FL process is desirable.
- Score: 23.133518718643643
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Catering to the proliferation of Internet of Things devices and distributed
machine learning at the edge, we propose an energy harvesting federated
learning (EHFL) framework in this paper. The introduction of EH implies that a
client's availability to participate in any FL round cannot be guaranteed,
which complicates the theoretical analysis. We derive novel convergence bounds
that capture the impact of time-varying device availabilities due to the random
EH characteristics of the participating clients, for both parallel and local
stochastic gradient descent (SGD) with non-convex loss functions. The results
suggest that having a uniform client scheduling that maximizes the minimum
number of clients throughout the FL process is desirable, which is further
corroborated by the numerical experiments using a real-world FL task and a
state-of-the-art EH scheduler.
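The abstract's takeaway—that convergence is governed by the minimum number of participating clients across rounds—can be illustrated with a toy simulation of energy-harvesting clients. This is an illustrative sketch, not the paper's EHFL framework or its scheduler: the battery model, harvest distribution, and all names and parameters below are assumptions.

```python
import random

def simulate_eh_clients(num_clients=10, rounds=20, cost=2.0,
                        harvest_rate=1.0, battery_cap=5.0, seed=0):
    """Toy simulation of energy-harvesting FL clients: a client can join a
    round only if its stored energy covers the participation cost, so the
    set of available clients varies randomly from round to round."""
    rng = random.Random(seed)
    battery = [0.0] * num_clients
    counts = []
    for _ in range(rounds):
        # Random energy arrivals (e.g. solar/RF), capped by battery size.
        for i in range(num_clients):
            battery[i] = min(battery[i] + rng.uniform(0.0, 2.0 * harvest_rate),
                             battery_cap)
        # Clients with enough stored energy participate and spend the cost.
        available = [i for i in range(num_clients) if battery[i] >= cost]
        for i in available:
            battery[i] -= cost
        counts.append(len(available))
    return counts

counts = simulate_eh_clients()
# The convergence bounds depend on the worst round, so a good EH scheduler
# tries to raise this minimum participation count.
print(min(counts), sum(counts) / len(counts))
```

Under this model, spending energy more evenly across rounds (rather than greedily) raises the minimum count, which is exactly the quantity the paper's bounds reward.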
Related papers
- On the Role of Server Momentum in Federated Learning [85.54616432098706]
We propose a general framework for server momentum that covers a large class of momentum schemes unexplored in federated learning (FL).
We provide rigorous convergence analysis for the proposed framework.
arXiv Detail & Related papers (2023-12-19T23:56:49Z) - Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z) - Asynchronous Online Federated Learning with Reduced Communication Requirements [6.282767337715445]
We propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy.
By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient.
We conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets.
arXiv Detail & Related papers (2023-03-27T14:06:05Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS$2$ in saving training time while guaranteeing convergence of the training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z) - A Unified Analysis of Federated Learning with Arbitrary Client Participation [33.86606068136201]
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
It is important to understand how partial client participation affects convergence.
We provide a unified convergence analysis for FL with arbitrary client participation.
arXiv Detail & Related papers (2022-05-26T21:56:31Z) - Combating Client Dropout in Federated Learning via Friend Model Substitution [8.325089307976654]
Federated learning (FL) is a new distributed machine learning framework known for its benefits on data privacy and communication efficiency.
This paper studies a passive partial client participation scenario that is much less well understood.
We develop a new algorithm FL-FDMS that discovers friends of clients whose data distributions are similar.
Experiments on MNIST and CIFAR-10 confirm the superior performance of FL-FDMS in handling client dropout in FL.
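The friend-substitution idea above can be sketched as a small aggregation routine. This is a hypothetical illustration, not the FL-FDMS algorithm from the paper: the similarity matrix, function name, and plain-averaging step are all assumptions.

```python
def aggregate_with_friend_substitution(updates, similarity, dropped):
    """Replace each dropped client's update with that of its most similar
    active client (its "friend"), then average all updates FedAvg-style.
    updates: dict client_id -> parameter list (ignored entry if dropped);
    similarity: nested list, similarity[i][j] between clients i and j."""
    active = [c for c in updates if c not in dropped]
    filled = {}
    for c in updates:
        if c in dropped:
            # Substitute the closest friend among the active clients.
            friend = max(active, key=lambda a: similarity[c][a])
            filled[c] = updates[friend]
        else:
            filled[c] = updates[c]
    dim = len(next(iter(filled.values())))
    n = len(filled)
    return [sum(filled[c][k] for c in filled) / n for k in range(dim)]

similarity = [[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]]
updates = {0: [1.0, 1.0], 1: None, 2: [3.0, 3.0]}  # client 1 dropped out
agg = aggregate_with_friend_substitution(updates, similarity, dropped={1})
# Client 1 inherits client 0's update, so the mean is over [1,1],[1,1],[3,3].
print(agg)
```

The substitution keeps the effective participation count constant across rounds, which is why dropout hurts less when similar "friends" exist.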
arXiv Detail & Related papers (2022-05-26T08:34:28Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning [7.802192899666384]
Federated learning enables a cluster of decentralized mobile devices at the edge to collaboratively train a shared machine learning model.
This decentralized training approach is demonstrated as a practical solution to mitigate the risk of privacy leakage.
This paper jointly optimizes time-to-convergence and energy efficiency of state-of-the-art FL use cases.
arXiv Detail & Related papers (2021-07-16T23:41:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.