Exploring Deep Reinforcement Learning-Assisted Federated Learning for
Online Resource Allocation in EdgeIoT
- URL: http://arxiv.org/abs/2202.07391v1
- Date: Tue, 15 Feb 2022 13:36:15 GMT
- Title: Exploring Deep Reinforcement Learning-Assisted Federated Learning for
Online Resource Allocation in EdgeIoT
- Authors: Jingjing Zheng, Kai Li, Naram Mhaisen, Wei Ni, Eduardo Tovar, Mohsen
Guizani
- Abstract summary: Federated learning (FL) has been increasingly considered to preserve training-data privacy from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy-energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (fewer than 100 iterations) while improving the FL accuracy-to-energy-consumption ratio by 51.8% compared to the existing state-of-the-art benchmark.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has been increasingly considered to preserve
training-data privacy from eavesdropping attacks in mobile edge computing-based
Internet of Things (EdgeIoT). On the one hand, the learning accuracy of FL can
be improved by selecting the IoT devices with large datasets for training,
which gives rise to higher energy consumption. On the other hand, the energy
consumption can be reduced by selecting the IoT devices with small datasets for
FL, resulting in lower learning accuracy. In this paper, we formulate a new
resource allocation problem for EdgeIoT to balance the learning accuracy of FL
and the energy consumption of the IoT devices. We propose a new federated
learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3)
framework to achieve the optimal accuracy-energy balance in a continuous
domain. Furthermore, long short-term memory (LSTM) is leveraged in FL-DLT3 to
predict the time-varying network state while FL-DLT3 is trained to select the
IoT devices and allocate the transmit power. Numerical results demonstrate that
the proposed FL-DLT3 achieves fast convergence (fewer than 100 iterations)
while improving the FL accuracy-to-energy-consumption ratio by 51.8% compared
to the existing state-of-the-art benchmark.
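The accuracy-energy tradeoff described in the abstract can be illustrated as a reward signal that a TD3-style agent would maximize. The sketch below is a minimal, hypothetical illustration: the linear accuracy proxy, the Shannon-style rate model, and the device names are assumptions for illustration, not the paper's actual formulation.

```python
import math

# Hypothetical sketch of an accuracy-to-energy reward for one FL round.
# A TD3-style agent picks which IoT devices participate and at what
# transmit power; the reward rises with accuracy and falls with energy.
# All models below are illustrative assumptions, not the paper's own.

def transmit_energy(power_w, dataset_size, bandwidth_hz=1e6, noise_w=1e-3):
    """Energy to upload a local update: power * (bits / Shannon-style rate)."""
    bits = dataset_size * 32                              # toy traffic model
    rate = bandwidth_hz * math.log2(1.0 + power_w / noise_w)
    return power_w * bits / rate

def accuracy_proxy(selected_sizes, total_size):
    """Toy proxy: accuracy grows with the data fraction covered by selected devices."""
    return sum(selected_sizes) / total_size

def reward(devices, selection, powers):
    """Accuracy-to-energy ratio for a selection of devices and their powers.
    devices: {name: dataset_size}; selection: iterable of names; powers: {name: W}."""
    total = sum(devices.values())
    sizes = [devices[d] for d in selection]
    energy = sum(transmit_energy(powers[d], devices[d]) for d in selection)
    return accuracy_proxy(sizes, total) / energy

devices = {"cam": 5000, "sensor": 500, "phone": 2000}
# Selecting large-data devices raises the accuracy proxy but costs more energy,
# which is exactly the tension the agent has to resolve.
r = reward(devices, {"cam", "phone"}, {"cam": 0.5, "phone": 0.5})
```

In the paper's setting, this scalar would be fed back to the TD3 critic networks, while the LSTM supplies the predicted network state that forms part of the agent's observation.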
Related papers
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider a FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks [27.963991995365532]
Federated learning (FL) is a new artificial intelligence concept that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending the raw data to centralized nodes for processing.
Despite numerous advantages, low computing resources at IoT devices and high communication costs for exchanging model parameters make applications of FL in massive IoT networks very limited.
We develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large scale IoT networks.
arXiv Detail & Related papers (2022-04-14T05:29:40Z)
- On the Tradeoff between Energy, Precision, and Accuracy in Federated Quantized Neural Networks [68.52621234990728]
Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission.
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
arXiv Detail & Related papers (2021-11-15T17:00:03Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction [9.408271687085476]
This paper presents FLeet, the first Online Federated Learning system.
Online FL combines the privacy of Standard FL with the precision of online learning.
I-Prof is a new lightweight profiler that predicts and controls the impact of learning tasks on mobile devices.
AdaSGD is a new adaptive learning algorithm that is resilient to delayed updates.
arXiv Detail & Related papers (2020-06-12T15:43:38Z)
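The FLeet entry above mentions AdaSGD, an adaptive algorithm resilient to delayed updates. A staleness-aware update of that general kind can be sketched as follows; the 1/(1+staleness) damping factor is an illustrative assumption, not FLeet's actual rule.

```python
# Hypothetical sketch of a staleness-aware SGD step in the spirit of AdaSGD:
# gradients computed against an old model version are damped before being
# applied, so stragglers cannot drag the model backwards. The damping
# function below is an illustrative assumption, not FLeet's actual rule.

def apply_stale_update(weights, gradient, lr, current_round, update_round):
    staleness = current_round - update_round    # rounds since the client read the model
    damping = 1.0 / (1.0 + staleness)           # older updates contribute less
    return [w - lr * damping * g for w, g in zip(weights, gradient)]

w = [1.0, -2.0]
fresh = apply_stale_update(w, [0.5, 0.5], lr=0.1, current_round=10, update_round=10)
stale = apply_stale_update(w, [0.5, 0.5], lr=0.1, current_round=10, update_round=5)
# A fresh update moves the weights further than a five-round-stale one.
```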
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.