Federated Double Deep Q-learning for Joint Delay and Energy Minimization
in IoT networks
- URL: http://arxiv.org/abs/2104.11320v1
- Date: Fri, 2 Apr 2021 18:41:59 GMT
- Title: Federated Double Deep Q-learning for Joint Delay and Energy Minimization
in IoT networks
- Authors: Sheyda Zarandi and Hina Tabassum
- Abstract summary: We propose a federated deep reinforcement learning framework to solve a multi-objective optimization problem.
To enhance the learning speed of IoT devices (agents), we incorporate federated learning (FDL) at the end of each episode.
Our numerical results demonstrate the efficacy of our proposed federated DDQN framework in terms of learning speed.
- Score: 12.599009485247283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a federated deep reinforcement learning framework
to solve a multi-objective optimization problem, where we consider minimizing
the expected long-term task completion delay and energy consumption of IoT
devices. This is done by optimizing offloading decisions, computation resource
allocation, and transmit power allocation. Since the formulated problem is a
mixed-integer non-linear program (MINLP), we first cast our problem as a
multi-agent distributed deep reinforcement learning (DRL) problem and address
it using double deep Q-network (DDQN), where the actions are offloading
decisions. The immediate cost of each agent is calculated through solving
either the transmit power optimization or local computation resource
optimization, based on the selected offloading decisions (actions). Then, to
enhance the learning speed of IoT devices (agents), we incorporate federated
learning (FDL) at the end of each episode. FDL enhances the scalability of the
proposed DRL framework, creates a context for cooperation between agents, and
minimizes their privacy concerns. Our numerical results demonstrate the
efficacy of our proposed federated DDQN framework in terms of learning speed
compared to federated deep Q network (DQN) and non-federated DDQN algorithms.
In addition, we investigate the impact of the batch size, the number of network
layers, and the DDQN target network update frequency on the learning speed of the FDL.
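To make the described workflow concrete, the sketch below illustrates the two ingredients of the abstract: per-device double DQN agents whose discrete actions are offloading decisions, and a FedAvg-style averaging of the Q-network weights at the end of each episode. This is a minimal sketch and not the authors' code; the state features, network sizes, target-update period, environment dynamics, and the random cost that stands in for the per-action transmit-power/CPU sub-problem are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): each IoT device runs a
# DDQN agent whose action is an offloading decision, and the Q-network weights
# are federated-averaged at the end of every episode.
import copy
import random
import numpy as np
import torch
import torch.nn as nn

N_DEVICES = 4        # number of IoT devices (agents)         -- assumed
STATE_DIM = 6        # e.g. queue length, channel gain, ...   -- assumed
N_ACTIONS = 2        # 0 = compute locally, 1 = offload
GAMMA, LR = 0.99, 1e-3

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, s):
        return self.net(s)

class Agent:
    """One IoT device: local replay buffer plus online and target Q-networks."""
    def __init__(self):
        self.q, self.q_target = QNet(), QNet()
        self.q_target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=LR)
        self.buffer = []

    def act(self, state, eps=0.1):
        if random.random() < eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2 = map(np.array, zip(*random.sample(self.buffer, batch_size)))
        s = torch.tensor(s, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        with torch.no_grad():
            # Double DQN target: the online net picks the next action,
            # the target net evaluates it.
            a_star = self.q(s2).argmax(dim=1, keepdim=True)
            target = r + GAMMA * self.q_target(s2).gather(1, a_star).squeeze(1)
        q_sa = self.q(s).gather(1, a).squeeze(1)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def federated_average(agents):
    """FedAvg step at the end of an episode: every agent gets the mean Q-network."""
    avg = copy.deepcopy(agents[0].q.state_dict())
    for key in avg:
        avg[key] = torch.stack([ag.q.state_dict()[key].float() for ag in agents]).mean(0)
    for ag in agents:
        ag.q.load_state_dict(avg)
        ag.q_target.load_state_dict(avg)

# Illustrative training loop with a stand-in environment (random dynamics and costs).
agents = [Agent() for _ in range(N_DEVICES)]
for episode in range(5):
    states = np.random.rand(N_DEVICES, STATE_DIM)
    for t in range(50):
        for i, ag in enumerate(agents):
            a = ag.act(states[i])
            # In the paper the immediate cost comes from solving the transmit-power or
            # local-CPU allocation sub-problem for the chosen action; a random cost
            # stands in for it here.
            r, s2 = -np.random.rand(), np.random.rand(STATE_DIM)
            ag.buffer.append((states[i].copy(), a, r, s2))
            states[i] = s2
            ag.train_step()
        if t % 20 == 0:  # DDQN target-network update period (assumed)
            for ag in agents:
                ag.q_target.load_state_dict(ag.q.state_dict())
    federated_average(agents)  # FDL step at the end of each episode
```

In the actual framework the transition and cost would come from the optimized power/CPU allocation for each device; only the Q-network weights are exchanged in the averaging step, which is what keeps the devices' data local and limits the privacy exposure the abstract refers to.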
Related papers
- Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks [3.9162099309900835]
Decentralized federated learning (DFL) has the promise of boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination.
Most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network.
We jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network.
arXiv Detail & Related papers (2024-08-08T18:05:11Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Wirelessly Powered Federated Learning Networks: Joint Power Transfer, Data Sensing, Model Training, and Resource Allocation [24.077525032187893]
Federated learning (FL) has found many successes in wireless networks.
However, the implementation of FL has been hindered by the energy limitation of mobile devices (MDs) and the availability of training data at MDs.
How to integrate wireless power transfer into FL to enable sustainable FL networks remains an open question.
arXiv Detail & Related papers (2023-08-09T13:38:58Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z)
- Online Learning for Orchestration of Inference in Multi-User End-Edge-Cloud Networks [3.6076391721440633]
Collaborative end-edge-cloud computing for deep learning offers a range of performance and efficiency trade-offs.
We propose a reinforcement-learning-based computation offloading solution that learns an optimal offloading policy.
Our solution provides 35% speedup in the average response time compared to the state-of-the-art with less than 0.9% accuracy reduction.
arXiv Detail & Related papers (2022-02-21T21:41:29Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)