Dynamic Resource Allocation for Metaverse Applications with Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2302.13445v1
- Date: Mon, 27 Feb 2023 00:30:01 GMT
- Title: Dynamic Resource Allocation for Metaverse Applications with Deep
Reinforcement Learning
- Authors: Nam H. Chu, Diep N. Nguyen, Dinh Thai Hoang, Khoa T. Phan, Eryk
Dutkiewicz, Dusit Niyato, and Tao Shu
- Abstract summary: This work proposes a novel framework to dynamically manage and allocate different types of resources for Metaverse applications.
We first propose an effective solution to divide applications into groups, namely MetaInstances, where common functions can be shared among applications.
Then, to capture the real-time, dynamic, and uncertain characteristics of request arrival and application departure processes, we develop a semi-Markov decision process-based framework.
- Score: 64.75603723249837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a novel framework to dynamically and effectively manage
and allocate different types of resources for Metaverse applications, which are
forecasted to demand massive resources of various types that have never been
seen before. Specifically, by studying functions of Metaverse applications, we
first propose an effective solution to divide applications into groups, namely
MetaInstances, where common functions can be shared among applications to
enhance resource usage efficiency. Then, to capture the real-time, dynamic, and
uncertain characteristics of request arrival and application departure
processes, we develop a semi-Markov decision process-based framework and
propose an intelligent algorithm that can gradually learn the optimal admission
policy to maximize the revenue and resource usage efficiency for the Metaverse
service provider and at the same time enhance the Quality-of-Service for
Metaverse users. Extensive simulation results show that our proposed approach
can achieve up to 120% greater revenue for the Metaverse service providers and
up to 178.9% higher acceptance probability for Metaverse application requests
than those of other baselines.
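The admission-control idea in the abstract can be illustrated with a toy tabular Q-learning sketch: requests of different revenue classes arrive, and an agent learns which ones to admit under limited capacity. All quantities here (capacity, revenue values, horizon) are made-up assumptions for illustration; the paper's actual semi-Markov decision process formulation and learning algorithm are considerably more involved.

```python
import random
from collections import defaultdict

# Toy illustration of learning an admission policy that maximizes revenue
# under limited capacity. Hypothetical setup: one resource type, two
# request classes with different revenues; NOT the paper's SMDP algorithm.

CAPACITY = 4          # units of a single resource type (assumed)
REVENUE = [1.0, 5.0]  # per-request revenue for classes 0 and 1 (assumed)
HORIZON = 8           # arrivals per episode (assumed)

def run_episode(Q, rng, eps, alpha=0.1, gamma=0.9):
    free, total = CAPACITY, 0.0
    for _ in range(HORIZON):
        cls = rng.randrange(2)       # a request class arrives uniformly
        state = (free, cls)
        # epsilon-greedy over actions: 0 = reject, 1 = accept
        if rng.random() < eps:
            action = rng.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        reward = 0.0
        if action == 1 and free > 0:
            reward = REVENUE[cls]    # revenue only when capacity remains
            free -= 1
        total += reward
        # bootstrap on the best action over possible next request classes
        nxt = max(Q[((free, c), a)] for c in range(2) for a in range(2))
        Q[(state, action)] += alpha * (reward + gamma * nxt - Q[(state, action)])
    return total

rng = random.Random(0)
Q = defaultdict(float)
for _ in range(3000):
    run_episode(Q, rng, eps=0.2)

# With one unit of capacity left, the learned values should favor
# admitting the high-revenue class rather than rejecting it.
prefer_high = Q[((1, 1), 1)] > Q[((1, 1), 0)]
print(prefer_high)
```

The intuition the sketch captures is the one in the abstract: an admission policy learned from experience can reserve scarce resources for higher-value requests instead of greedily accepting everything.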
Related papers
- Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning [0.0]
We introduce a general mapping of non-cumulative Markov decision processes to standard MDPs.
This allows all techniques developed to find optimal policies for MDPs to be directly applied to the larger class of NCMDPs.
We show applications in a diverse set of tasks, including classical control, portfolio optimization in finance, and discrete optimization problems.
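The general idea of such a mapping can be sketched for one concrete non-cumulative objective, the maximum per-step reward along a trajectory: augment the state with a running statistic and redefine the reward as its increment, so that cumulative reward in the augmented MDP equals the non-cumulative objective in the original one. This is a simplified rendition under assumed nonnegative rewards, with a hypothetical `env_step` environment interface; the cited paper's general mapping may differ in its details.

```python
# Sketch: turn a max-reward objective into a standard cumulative MDP by
# augmenting the state with the running maximum. Summing the augmented
# rewards over an episode telescopes to max_t raw_reward_t.

def augmented_step(state, running_max, action, env_step):
    """One step of the augmented MDP.

    env_step(state, action) -> (next_state, raw_reward) is the original
    environment (hypothetical interface, for illustration only).
    """
    next_state, raw = env_step(state, action)
    new_max = max(running_max, raw)
    aug_reward = new_max - running_max   # increment of the running max
    return (next_state, new_max), aug_reward

# Tiny demonstration with a scripted environment:
rewards = iter([1.0, 4.0, 2.0, 3.0])
def env_step(state, action):
    return state + 1, next(rewards)

# Initialize the running max at 0.0, assuming nonnegative rewards.
s, m, total = 0, 0.0, 0.0
for _ in range(4):
    (s, m), r = augmented_step(s, m, None, env_step)
    total += r
print(total)   # cumulative augmented reward equals the max raw reward
```

Because the augmented reward is cumulative, any standard MDP solver or RL algorithm can be applied unchanged, which is the point of the mapping.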
arXiv Detail & Related papers (2024-05-22T13:01:37Z)
- RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning [8.389454219309837]
Solving multimodal optimization problems (MMOPs) requires finding all optimal solutions, which is challenging under limited function evaluations.
We propose RLEMMO, a Meta-Black-Box Optimization framework, which maintains a population of solutions and incorporates a reinforcement learning agent.
With a novel reward mechanism that encourages both quality and diversity, RLEMMO can be effectively trained using a policy gradient algorithm.
arXiv Detail & Related papers (2024-04-12T05:02:49Z)
- MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement Learning for Discrete Prompt Optimization [49.60729578316884]
RL-based techniques can be used to search for prompts that maximize a set of user-specified reward functions.
Current techniques focus on maximizing the average of reward functions, which does not necessarily lead to prompts that achieve balance across rewards.
In this paper, we adapt several techniques for multi-objective optimization to RL-based discrete prompt optimization.
arXiv Detail & Related papers (2024-02-18T21:25:09Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements in code generation.
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- A Cost-Aware Mechanism for Optimized Resource Provisioning in Cloud Computing [6.369406986434764]
We propose a novel learning-based resource provisioning approach that achieves cost-reduction guarantees for demands.
Our method adapts to most requirements efficiently, and the resulting performance meets our design goals.
arXiv Detail & Related papers (2023-09-20T13:27:30Z)
- Attention-aware Resource Allocation and QoE Analysis for Metaverse xURLLC Services [78.17423912423999]
We study the interaction between the Metaverse service provider (MSP) and the network infrastructure provider (InP).
We propose a novel metric named Meta-Immersion that incorporates both the objective and subjective feelings of Metaverse users.
We develop an attention-aware rendering capacity allocation scheme to improve QoE in xURLLC.
arXiv Detail & Related papers (2022-08-10T16:51:27Z)
- Exploring Attention-Aware Network Resource Allocation for Customized Metaverse Services [69.37584804990806]
We design an attention-aware network resource allocation scheme to achieve customized Metaverse services.
The aim is to allocate more network resources to virtual objects in which users are more interested.
arXiv Detail & Related papers (2022-07-31T06:04:15Z)
- Deep Reinforcement Learning for Resource Allocation in Business Processes [3.0938904602244355]
We propose a novel representation that allows modeling of a multi-process environment with different process-based rewards.
We then use double deep reinforcement learning to search for an optimal resource allocation policy.
Deep-reinforcement-learning-based resource allocation achieved significantly better results than two commonly used techniques.
arXiv Detail & Related papers (2021-03-29T11:20:25Z)
- Information Directed Reward Learning for Reinforcement Learning [64.33774245655401]
We learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible.
In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types.
We support our findings with extensive evaluations in multiple environments and with different types of queries.
arXiv Detail & Related papers (2021-02-24T18:46:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.