Battery-aware Cyclic Scheduling in Energy-harvesting Federated Learning
- URL: http://arxiv.org/abs/2504.12181v1
- Date: Wed, 16 Apr 2025 15:38:38 GMT
- Title: Battery-aware Cyclic Scheduling in Energy-harvesting Federated Learning
- Authors: Eunjeong Jeong, Nikolaos Pappas
- Abstract summary: Federated Learning (FL) has emerged as a promising framework for distributed learning, but its growing complexity has led to significant energy consumption. We propose FedBacys, a battery-aware FL framework that introduces cyclic client participation based on users' battery levels. This work presents the first comprehensive evaluation of cyclic client participation in EHFL, incorporating both communication and computation costs into a unified, resource-aware scheduling strategy.
- Score: 8.616027454535876
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising framework for distributed learning, but its growing complexity has led to significant energy consumption, particularly from computations on the client side. This challenge is especially critical in energy-harvesting FL (EHFL) systems, where device availability fluctuates due to limited and time-varying energy resources. We propose FedBacys, a battery-aware FL framework that introduces cyclic client participation based on users' battery levels to cope with these issues. FedBacys enables clients to save energy and strategically perform local training just before their designated transmission time by clustering clients and scheduling their involvement sequentially. This design minimizes redundant computation, reduces system-wide energy usage, and improves learning stability. Our experiments demonstrate that FedBacys outperforms existing approaches in terms of energy efficiency and performance consistency, exhibiting robustness even under non-i.i.d. training data distributions and with very infrequent battery charging. This work presents the first comprehensive evaluation of cyclic client participation in EHFL, incorporating both communication and computation costs into a unified, resource-aware scheduling strategy.
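The abstract's cluster-then-cycle idea can be sketched as follows. This is a minimal illustration, not the paper's actual FedBacys algorithm: the clustering rule (rank by battery level, split into equal groups), the cluster count, and all function and variable names below are assumptions for illustration only.

```python
import itertools

def make_clusters(battery_levels, n_clusters):
    # Rank clients by battery level (highest first) and split them into
    # contiguous clusters of roughly equal size (illustrative rule).
    ranked = sorted(battery_levels, key=battery_levels.get, reverse=True)
    size = -(-len(ranked) // n_clusters)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def cyclic_schedule(clusters, n_rounds):
    # One cluster participates per round, in a fixed cyclic order; clients
    # outside the active cluster stay idle (harvesting energy) and train
    # only just before their own designated round.
    cycle = itertools.cycle(clusters)
    return [next(cycle) for _ in range(n_rounds)]

batteries = {"c0": 0.9, "c1": 0.2, "c2": 0.7, "c3": 0.4, "c4": 0.8, "c5": 0.1}
clusters = make_clusters(batteries, n_clusters=3)
schedule = cyclic_schedule(clusters, n_rounds=6)
```

Under this sketch, each client trains exactly once per cycle of three rounds, which is how redundant computation is avoided while every client still contributes regularly.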
Related papers
- Learn More by Using Less: Distributed Learning with Energy-Constrained Devices [3.730504020733928]
Federated Learning (FL) has emerged as a solution for distributed model training across decentralized, privacy-preserving devices. We propose LeanFed, an energy-aware FL framework designed to optimize client selection and training workloads on battery-constrained devices.
arXiv Detail & Related papers (2024-12-03T09:06:57Z)
- Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning [51.02352381270177]
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy.
This article provides a comprehensive overview of the SFL process and thoroughly analyzes energy consumption and privacy.
arXiv Detail & Related papers (2023-11-15T23:23:42Z)
- Energy-Aware Federated Learning with Distributed User Sampling and Multichannel ALOHA [3.7769304982979666]
Distributed learning on edge devices has attracted increased attention with the advent of federated learning (FL).
This letter considers the integration of energy harvesting (EH) devices into a FL network with multi-channel ALOHA.
Numerical results demonstrate the effectiveness of this method, particularly in critical setups.
arXiv Detail & Related papers (2023-09-12T08:05:39Z)
- A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z)
- FedZero: Leveraging Renewable Excess Energy in Federated Learning [4.741052304881078]
Federated Learning (FL) is an emerging machine learning technique that enables distributed model training across data silos or edge devices without data sharing.
One idea to reduce FL's carbon footprint is to schedule training jobs based on the availability of renewable excess energy.
We propose FedZero, an FL system that operates exclusively on renewable excess energy and spare capacity of compute infrastructure.
arXiv Detail & Related papers (2023-05-24T12:17:30Z)
- Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- FedTrees: A Novel Computation-Communication Efficient Federated Learning Framework Investigated in Smart Grids [8.437758224218648]
Next-generation smart meters can be used to measure, record, and report energy consumption data.
FedTrees is a new, lightweight FL framework that benefits from the outstanding features of ensemble learning.
arXiv Detail & Related papers (2022-09-30T19:47:46Z)
- EAFL: Towards Energy-Aware Federated Learning on Battery-Powered Edge Devices [3.448338949969246]
Federated learning (FL) is a newly emerged branch of AI that facilitates edge devices to collaboratively train a global machine learning model without centralizing data and with privacy by default.
In large-scale deployments, client heterogeneity is the norm which impacts training quality such as accuracy, fairness, and time.
We develop EAFL, an energy-aware FL selection method that considers energy consumption to maximize the participation of heterogeneous target devices.
arXiv Detail & Related papers (2022-08-09T02:15:45Z)
- Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
FEEL can significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z)
- A Framework for Energy and Carbon Footprint Analysis of Distributed and Federated Edge Learning [48.63610479916003]
This article breaks down and analyzes the main factors that influence the environmental footprint of distributed learning policies.
It models both vanilla and decentralized FL policies driven by consensus.
Results show that FL allows remarkable end-to-end energy savings (30%-40%) for wireless systems characterized by low bit/Joule efficiency.
arXiv Detail & Related papers (2021-03-18T16:04:42Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
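The summary of the last entry does not specify its compression scheme; top-k gradient sparsification with error feedback is one common instance of flexible communication compression, sketched below purely as an illustrative assumption (all names are hypothetical).

```python
def topk_compress(grad, k):
    # Keep only the k largest-magnitude entries of the update; zero the rest,
    # so only k values (plus indices) need to be transmitted.
    keep = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    sparse = [0.0] * len(grad)
    for i in keep:
        sparse[i] = grad[i]
    return sparse

def residual(grad, sparse):
    # Error feedback: accumulate the dropped mass locally so it is added to
    # later rounds, the usual mechanism for preserving convergence guarantees.
    return [g - s for g, s in zip(grad, sparse)]

g = [0.1, -0.9, 0.3, 0.05]
compressed = topk_compress(g, k=2)
carry = residual(g, compressed)
```

Varying k per round (larger when energy is plentiful, smaller when it is scarce) is one way such compression becomes "flexible" on heterogeneous devices.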
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.