EAFL: Towards Energy-Aware Federated Learning on Battery-Powered Edge
Devices
- URL: http://arxiv.org/abs/2208.04505v1
- Date: Tue, 9 Aug 2022 02:15:45 GMT
- Title: EAFL: Towards Energy-Aware Federated Learning on Battery-Powered Edge
Devices
- Authors: Amna Arouj and Ahmed M. Abdelmoniem
- Abstract summary: Federated learning (FL) is a newly emerged branch of AI that enables edge devices to collaboratively train a global machine learning model without centralizing data and with privacy by default.
In large-scale deployments, client heterogeneity is the norm, and it impacts training quality in terms of accuracy, fairness, and time.
We develop EAFL, an energy-aware FL selection method that considers energy consumption to maximize the participation of heterogeneous target devices.
- Score: 3.448338949969246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a newly emerged branch of AI that enables
edge devices to collaboratively train a global machine learning model without
centralizing data and with privacy by default. However, despite the remarkable
advancement, this paradigm comes with various challenges. Specifically, in
large-scale deployments, client heterogeneity is the norm, and it impacts
training quality in terms of accuracy, fairness, and time. Moreover, energy
consumption across these battery-constrained devices is largely unexplored and
a limitation for the wide adoption of FL. To address this issue, we develop
EAFL, an energy-aware FL selection method that considers energy consumption to
maximize the participation of heterogeneous target devices. EAFL is a
power-aware training algorithm that cherry-picks clients with higher battery
levels while also maximizing system efficiency. Our design jointly minimizes
the time-to-accuracy and maximizes the remaining on-device battery levels.
EAFL improves the test model accuracy by up to 85% and decreases the drop-out
of clients by up to 2.45×.
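The abstract describes the selection policy only at a high level: prefer clients with more remaining battery while also keeping system efficiency (time-to-accuracy) in view. The snippet below is a minimal sketch of that idea, not the authors' algorithm; the `battery_level` and `compute_speed` fields, the utility weighting, and the minimum-battery cut-off are assumptions made for this example.

```python
import random
from dataclasses import dataclass


@dataclass
class Client:
    client_id: int
    battery_level: float   # remaining charge in [0, 1] (assumed field)
    compute_speed: float   # relative training speed, higher is faster (assumed field)


def select_clients(clients, num_selected, min_battery=0.2, battery_weight=0.5):
    """Pick clients for the next FL round.

    Illustrative only: scores each eligible client by a weighted mix of
    remaining battery and compute speed, then keeps the top scorers. The
    actual EAFL policy is described in the paper, not reproduced here.
    """
    # Drop clients that are likely to drop out mid-round (assumed threshold).
    eligible = [c for c in clients if c.battery_level >= min_battery]

    # Higher battery and faster compute both raise the selection score.
    def score(c):
        return battery_weight * c.battery_level + (1 - battery_weight) * c.compute_speed

    ranked = sorted(eligible, key=score, reverse=True)
    return ranked[:num_selected]


if __name__ == "__main__":
    pool = [Client(i, random.random(), random.random()) for i in range(100)]
    round_clients = select_clients(pool, num_selected=10)
    print([c.client_id for c in round_clients])
```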
Related papers
- A Green Multi-Attribute Client Selection for Over-The-Air Federated Learning: A Grey-Wolf-Optimizer Approach [5.277822313069301]
Over-the-air (OTA) FL was introduced to tackle these challenges by disseminating model updates without direct device-to-device connections or centralized servers.
However, OTA-FL comes with its own limitations, namely heightened energy consumption and network latency.
We propose a multi-attribute client selection framework employing the grey wolf optimizer (GWO) to strategically control the number of participants in each round.
arXiv Detail & Related papers (2024-09-16T20:03:57Z)
- E-QUARTIC: Energy Efficient Edge Ensemble of Convolutional Neural Networks for Resource-Optimized Learning [9.957458251671486]
Ensembling models like Convolutional Neural Networks (CNNs) results in high memory and computing overhead, preventing their deployment in embedded systems.
We propose E-QUARTIC, a novel Energy Efficient Edge Ensembling framework to build ensembles of CNNs targeting Artificial Intelligence (AI)-based embedded systems.
arXiv Detail & Related papers (2024-09-12T19:30:22Z)
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning approach to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks [34.63384007690422]
Federated learning (FL) is a distributed and privacy-preserving learning framework for predictive modeling with massive data generated at the edge by Internet of Things (IoT) devices.
One major challenge preventing the wide adoption of FL in IoT is the pervasive power-supply constraint of IoT devices.
We propose FedLE, an energy-efficient client selection framework that enables lifespan extension of edge IoT networks.
arXiv Detail & Related papers (2023-02-14T19:41:24Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle device heterogeneity.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities (a minimal sketch of this kind of capacity-based assignment appears after this list).
We also propose an effective method to share knowledge among multiple local models of different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in EdgeIoT [53.68792408315411]
Federated learning (FL) has been increasingly considered to preserve the privacy of training data from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy and energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (less than 100 iterations) while the FL accuracy-to-energy consumption ratio is improved by 51.8% compared to existing state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-02-15T13:36:15Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
However, FEEL might significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
However, FL imposes huge communication and computation burdens on participating devices due to periodic global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- Accelerating Federated Learning over Reliability-Agnostic Clients in Mobile Edge Computing Systems [15.923599062148135]
Federated learning has emerged as a promising privacy-preserving approach to facilitating AI applications.
It remains a major challenge to optimize the efficiency and effectiveness of FL when it is integrated with the mobile edge computing (MEC) architecture.
In this paper, a multi-layer federated learning protocol called HybridFL is designed for the MEC architecture.
arXiv Detail & Related papers (2020-07-28T17:35:39Z)
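As flagged in the InclusiveFL entry above, the core idea there is to hand smaller models to weaker clients. The sketch below shows one way such a capacity-based assignment could look; the capability score, the bucket boundaries, and the variant names are assumptions made for this illustration and are not taken from the paper.

```python
from typing import Dict

# Illustrative model-variant names, ordered from largest to smallest.
# Names and bucket boundaries below are assumptions, not InclusiveFL's.
MODEL_VARIANTS = ("large", "medium", "small")


def assign_model_variants(capabilities: Dict[str, float]) -> Dict[str, str]:
    """Map each client to a model size based on a rough capability score in [0, 1]."""
    assignment = {}
    for client_id, score in capabilities.items():
        if score >= 0.66:
            assignment[client_id] = MODEL_VARIANTS[0]  # strongest clients get the full model
        elif score >= 0.33:
            assignment[client_id] = MODEL_VARIANTS[1]  # mid-tier clients get a reduced model
        else:
            assignment[client_id] = MODEL_VARIANTS[2]  # weakest clients get the smallest model
    return assignment


if __name__ == "__main__":
    print(assign_model_variants({"phone-a": 0.9, "sensor-b": 0.1, "tablet-c": 0.5}))
```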
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.