Learn More by Using Less: Distributed Learning with Energy-Constrained Devices
- URL: http://arxiv.org/abs/2412.02289v1
- Date: Tue, 03 Dec 2024 09:06:57 GMT
- Title: Learn More by Using Less: Distributed Learning with Energy-Constrained Devices
- Authors: Roberto Pereira, Cristian J. Vaca-Rubio, Luis Blanco,
- Abstract summary: Federated Learning (FL) has emerged as a solution for distributed model training across decentralized, privacy-preserving devices.
We propose LeanFed, an energy-aware FL framework designed to optimize client selection and training workloads on battery-constrained devices.
- Score: 3.730504020733928
- License:
- Abstract: Federated Learning (FL) has emerged as a solution for distributed model training across decentralized, privacy-preserving devices, but the different energy capacities of participating devices (system heterogeneity) constrain real-world implementations. These energy limitations not only reduce model accuracy but also increase dropout rates, impacting convergence in practical FL deployments. In this work, we propose LeanFed, an energy-aware FL framework designed to optimize client selection and training workloads on battery-constrained devices. LeanFed leverages adaptive data usage by dynamically adjusting the fraction of local data each device utilizes during training, thereby maximizing device participation across communication rounds while ensuring devices do not run out of battery during the process. We rigorously evaluate LeanFed against traditional FedAvg on the CIFAR-10 and CIFAR-100 datasets, simulating various levels of data heterogeneity and device participation rates. Results show that LeanFed consistently enhances model accuracy and stability, particularly in settings with high data heterogeneity and limited battery life, by mitigating client dropout and extending device availability. This approach demonstrates the potential of energy-efficient, privacy-preserving FL in real-world, large-scale applications, setting a foundation for robust and sustainable pervasive AI on resource-constrained networks.
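The abstract does not give formulas for LeanFed's adaptive data usage, but the core idea of capping each round's workload by a device's remaining battery can be sketched as follows. Every name here, and the linear per-sample energy model, are illustrative assumptions, not the paper's actual method.

```python
def data_fraction(battery, rounds_left, cost_per_sample, n_samples):
    """Pick the largest fraction of local data a device can train on
    this round without exhausting its battery before the final round.

    Assumes a linear energy model: training on one sample costs a fixed
    amount of energy (cost_per_sample, e.g. in joules). This is a
    hypothetical sketch, not LeanFed's published procedure.
    """
    if rounds_left <= 0 or n_samples == 0:
        return 0.0
    budget = battery / rounds_left          # energy allowance this round
    affordable = budget / cost_per_sample   # samples the budget covers
    return min(1.0, affordable / n_samples)

# A device with 90 J left, 3 rounds to go, 0.1 J per sample, 200 samples:
# per-round budget 30 J -> 300 affordable samples -> capped at full data.
print(data_fraction(90.0, 3, 0.1, 200))   # 1.0
# A weaker device with 12 J left can only afford a quarter of its data.
print(data_fraction(12.0, 3, 0.1, 160))   # 0.25
```

Spreading the budget evenly over the remaining rounds is the simplest way to guarantee a device survives the whole training process; an actual scheduler would also account for communication energy and measured, rather than assumed, per-sample cost.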
Related papers
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z) - AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous
Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experiment results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9 times while realizing a reasonable global test accuracy.
arXiv Detail & Related papers (2023-01-08T15:25:55Z) - FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient
Federated Learning [11.093360539563657]
Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training.
We propose FedGPO to optimize the energy-efficiency of FL use cases while guaranteeing model convergence.
In our experiments, FedGPO improves the model convergence time by 2.4 times, and achieves 3.6 times higher energy efficiency over the baseline settings.
arXiv Detail & Related papers (2022-11-30T01:22:57Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - EAFL: Towards Energy-Aware Federated Learning on Battery-Powered Edge
Devices [3.448338949969246]
Federated learning (FL) is a newly emerged branch of AI that facilitates edge devices to collaboratively train a global machine learning model without centralizing data and with privacy by default.
In large-scale deployments, client heterogeneity is the norm which impacts training quality such as accuracy, fairness, and time.
We develop EAFL, an energy-aware FL selection method that considers energy consumption to maximize the participation of heterogeneous target devices.
arXiv Detail & Related papers (2022-08-09T02:15:45Z) - Exploring Deep Reinforcement Learning-Assisted Federated Learning for
Online Resource Allocation in EdgeIoT [53.68792408315411]
Federated learning (FL) has been increasingly considered to preserve training data privacy from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy and energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (less than 100 iterations) while the FL accuracy-to-energy consumption ratio is improved by 51.8% compared to existing state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-02-15T13:36:15Z)
- AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning [7.802192899666384]
Federated learning enables a cluster of decentralized mobile devices at the edge to collaboratively train a shared machine learning model.
This decentralized training approach is demonstrated as a practical solution to mitigate the risk of privacy leakage.
This paper jointly optimizes the time-to-convergence and energy efficiency of state-of-the-art FL use cases.
arXiv Detail & Related papers (2021-07-16T23:41:26Z)
- Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer [42.30741737568212]
We propose powering devices using wireless power transfer (WPT).
This work aims at the derivation of guidelines on deploying the resultant wirelessly powered FEEL (WP-FEEL) system.
The results provide useful guidelines on WPT provisioning to guarantee learning performance.
arXiv Detail & Related papers (2021-02-24T15:47:34Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
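The communication-compression idea in the last entry can be illustrated with a generic top-k gradient sparsification sketch; this is a standard compression technique, not that paper's actual algorithm, and all names here are illustrative.

```python
def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient entries.

    Returns (indices, values): the pairs a device would transmit to
    the server instead of the full dense gradient, cutting uplink
    traffic from len(grad) floats to k index-value pairs.
    """
    ranked = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    keep = sorted(ranked[:k])
    return keep, [grad[i] for i in keep]

def densify(indices, values, dim):
    """Server-side reconstruction: zeros everywhere except the kept entries."""
    dense = [0.0] * dim
    for i, v in zip(indices, values):
        dense[i] = v
    return dense

grad = [0.1, -2.0, 0.03, 1.5, -0.4]
idx, vals = top_k_sparsify(grad, 2)
print(idx, vals)                       # [1, 3] [-2.0, 1.5]
print(densify(idx, vals, len(grad)))   # [0.0, -2.0, 0.0, 1.5, 0.0]
```

A flexible scheme in the spirit of that entry would adjust k per round and per device, spending more uplink budget when a device's channel and battery allow it; convergence guarantees then hinge on bounding the error introduced by the dropped coordinates.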
This list is automatically generated from the titles and abstracts of the papers in this site.