Multi-agent Reinforcement Learning for Resource Allocation in IoT
networks with Edge Computing
- URL: http://arxiv.org/abs/2004.02315v1
- Date: Sun, 5 Apr 2020 20:59:20 GMT
- Title: Multi-agent Reinforcement Learning for Resource Allocation in IoT
networks with Edge Computing
- Authors: Xiaolan Liu, Jiadong Yu, Yue Gao
- Abstract summary: It is challenging for end users to offload computation due to their heavy demands on spectrum and computation resources.
In this paper, we investigate a computation offloading mechanism with resource allocation in IoT edge computing networks by formulating it as a stochastic game.
- Score: 16.129649374251088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To support popular Internet of Things (IoT) applications such as virtual
reality, mobile games and wearable devices, edge computing provides a front-end
distributed computing paradigm that complements centralized cloud computing with low
latency. However, it is challenging for end users to offload computation due to
their heavy demands on spectrum and computation resources and their frequent
requests on Radio Access Technology (RAT). In this paper, we investigate a
computation offloading mechanism with resource allocation in IoT edge computing
networks by formulating it as a stochastic game. Here, each end user is a
learning agent that observes its local environment to learn optimal decisions on
either local computing or edge computing, with the goal of minimizing the
long-term system cost by choosing its transmit power level, RAT and sub-channel
without any information about the other end users. A multi-agent
reinforcement learning framework is therefore developed to solve the stochastic game
with a proposed independent learners based multi-agent Q-learning (IL-based MA-Q)
algorithm. Simulations demonstrate that the proposed IL-based MA-Q algorithm
solves the formulated problem and, compared to the other two benchmark
algorithms, is more energy efficient without incurring extra channel-estimation
cost at the centralized gateway.
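The core idea of the IL-based MA-Q algorithm — each end user runs its own tabular Q-learner over its local observation, treating the other users as part of the environment — can be sketched as below. The environment, cost model and action set here are illustrative toy stand-ins, not the paper's actual formulation; the action is reduced to an (offload?, power level) pair and congestion simply counts how many agents offload in the same step.

```python
import random
from collections import defaultdict

class ILAgent:
    """One independent learner: tabular Q-learning over its own local
    observation only, ignoring the other agents (the 'IL' in IL-based MA-Q)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:              # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, cost, next_state):
        # The objective is to minimize long-term cost, so reward = -cost.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = -cost + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def step_cost(action, n_offloaders):
    """Toy per-step cost: offloading pays for channel congestion and transmit
    power; local computing pays a fixed, higher energy cost."""
    offload, power = action
    if offload:
        return 1.0 + 0.5 * n_offloaders + 0.2 * power
    return 3.0

random.seed(0)
actions = [(off, pw) for off in (0, 1) for pw in (1, 2)]  # (edge?, power level)
agents = [ILAgent(actions) for _ in range(3)]
state = 0  # single-state toy environment; the paper uses local observations
for _ in range(2000):
    joint = [ag.act(state) for ag in agents]
    n_off = sum(a[0] for a in joint)   # congestion: users sharing the channel
    for ag, a in zip(agents, joint):
        ag.update(state, a, step_cost(a, n_off), state)

greedy = [max(actions, key=lambda a: ag.q[(state, a)]) for ag in agents]
print(greedy)  # each agent learns to offload at the lowest power level
```

Note that each agent only ever sees its own cost and action, which is what makes the scheme work without channel estimation or information exchange at a centralized gateway.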
Related papers
- To Train or Not to Train: Balancing Efficiency and Training Cost in Deep Reinforcement Learning for Mobile Edge Computing [15.079887992932692]
We present a new algorithm to dynamically select when to train a Deep Reinforcement Learning (DRL) agent that allocates resources.
Our method is highly general, as it can be directly applied to any scenario involving a training overhead.
arXiv Detail & Related papers (2024-11-11T16:02:12Z)
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multiuser SC systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hyper game theory.
Simulation results show that the proposed Stackelberg hyper game results in efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z)
- Online Learning for Orchestration of Inference in Multi-User End-Edge-Cloud Networks [3.6076391721440633]
Collaborative end-edge-cloud computing for deep learning offers a range of performance and efficiency trade-offs.
We propose a reinforcement-learning-based computation offloading solution that learns the optimal offloading policy.
Our solution provides 35% speedup in the average response time compared to the state-of-the-art with less than 0.9% accuracy reduction.
arXiv Detail & Related papers (2022-02-21T21:41:29Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Edge Intelligence for Energy-efficient Computation Offloading and Resource Allocation in 5G Beyond [7.953533529450216]
5G beyond is an end-edge-cloud orchestrated network that can exploit heterogeneous capabilities of the end devices, edge servers, and the cloud.
In multi-user wireless networks, diverse application requirements and the availability of various radio access modes for communication among devices make it challenging to design an optimal computation offloading scheme.
Deep Reinforcement Learning (DRL) is an emerging technique to address such an issue with limited and less accurate network information.
arXiv Detail & Related papers (2020-11-17T05:51:03Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
- Information Freshness-Aware Task Offloading in Air-Ground Integrated Edge Computing Systems [49.80033982995667]
This paper studies the problem of information freshness-aware task offloading in an air-ground integrated multi-access edge computing system.
A third-party real-time application service provider provides computing services to the subscribed mobile users (MUs) with the limited communication and computation resources obtained from the infrastructure provider (InP).
We derive a novel deep reinforcement learning (RL) scheme that adopts two separate double deep Q-networks for each MU to approximate the Q-factor and the post-decision Q-factor.
arXiv Detail & Related papers (2020-07-15T21:32:43Z)
- Computation Offloading in Multi-Access Edge Computing Networks: A Multi-Task Learning Approach [7.203439085947118]
Multi-access edge computing (MEC) has already shown its potential in enabling mobile devices to run computation-intensive applications by offloading some tasks to a nearby access point (AP) integrated with a MEC server (MES).
However, due to the varying network conditions and limited computation resources of the MES, the offloading decisions taken by a mobile device and the computational resources allocated by the MES may not be efficiently achieved with the lowest cost.
We propose a dynamic offloading framework for the MEC network, in which the uplink non-orthogonal multiple access (NOMA) is used to enable multiple devices to upload their
arXiv Detail & Related papers (2020-06-29T15:11:10Z)
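Several of the entries above weigh local execution against offloading to an edge server. The trade-off can be made concrete with a minimal latency/energy comparison; the model below is a standard textbook-style sketch, and every parameter value (CPU frequencies, uplink rate, transmit power, the chip constant kappa) is an illustrative assumption rather than a figure from any of the papers listed.

```python
def local_cost(cycles, f_local, kappa=1e-27):
    """Local execution: latency = cycles / CPU frequency; energy follows the
    common CMOS model E = kappa * f^2 * cycles (kappa: illustrative constant)."""
    return cycles / f_local, kappa * f_local ** 2 * cycles

def edge_cost(bits, rate, cycles, f_edge, p_tx):
    """Offloading: upload the task, then execute at the edge server; the
    device only spends transmit energy (result download is ignored here)."""
    t_up = bits / rate
    return t_up + cycles / f_edge, p_tx * t_up

# Illustrative task: 1 Mbit of input data, 1e9 CPU cycles of work.
t_loc, e_loc = local_cost(cycles=1e9, f_local=1e9)            # 1 GHz device CPU
t_edge, e_edge = edge_cost(bits=1e6, rate=5e6, cycles=1e9,    # 5 Mbit/s uplink
                           f_edge=10e9, p_tx=0.5)             # 10 GHz edge, 0.5 W
print(t_loc, e_loc)    # ~1.0 s, ~1.0 J
print(t_edge, e_edge)  # ~0.3 s, ~0.1 J -> offloading wins on both axes here
```

Shrinking the uplink rate or raising the transmit power flips the comparison, which is exactly why the papers above frame offloading as a decision problem rather than a fixed policy.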
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.