Fractional Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing
- URL: http://arxiv.org/abs/2312.10418v2
- Date: Tue, 19 Dec 2023 13:11:49 GMT
- Title: Fractional Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing
- Authors: Lyudong Jin, Ming Tang, Meng Zhang, Hao Wang
- Abstract summary: This work focuses on the timeliness of computation-intensive updates, measured by Age-of-Information (AoI).
We study how to jointly optimize the task updating and offloading policies for an AoI objective in fractional form.
Experimental results show that our proposed algorithms reduce the average AoI by up to 57.6% compared with several non-fractional benchmarks.
- Score: 11.403989519949173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile edge computing (MEC) is a promising paradigm for real-time
applications with intensive computational needs (e.g., autonomous driving), as
it can reduce the processing delay. In this work, we focus on the timeliness
of computation-intensive updates, measured by Age-of-Information (AoI), and
study how to jointly optimize the task updating and offloading policies for
an AoI objective in fractional form. Specifically, we consider edge load
dynamics and formulate a
task scheduling problem to minimize the expected time-average AoI. The
uncertain edge load dynamics, the nature of the fractional objective, and
hybrid continuous-discrete action space (due to the joint optimization) make
this problem challenging and existing approaches not directly applicable. To
this end, we propose a fractional reinforcement learning (RL) framework and
prove its convergence. We further design a model-free fractional deep RL (DRL)
algorithm, where each device makes scheduling decisions with the hybrid action
space without knowing the system dynamics and decisions of other devices.
Experimental results show that our proposed algorithms reduce the average AoI
by up to 57.6% compared with several non-fractional benchmarks.
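To make the fractional objective concrete: the quantity being minimized is a ratio of long-run averages, roughly J(pi) = E[sum_t AoI_t] / E[sum_t tau_t], where tau_t is the duration of decision epoch t. The sketch below illustrates one standard way such ratio objectives can be handled, a Dinkelbach-style outer loop that repeatedly solves a non-fractional RL problem with the parametrized cost AoI_t - lambda * tau_t and then resets lambda to the current ratio estimate. This is a minimal tabular sketch under assumed interfaces: the environment methods, n_states, n_actions, and all function names are illustrative and not from the paper, and it omits the paper's model-free deep RL treatment of the hybrid continuous-discrete action space and its convergence analysis.

```python
import numpy as np

def tabular_q_learning(env, lam, episodes=200, alpha=0.1, gamma=0.99, eps=0.1):
    """Inner solver: plain Q-learning on the lambda-parametrized reward."""
    Q = np.zeros((env.n_states, env.n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = env.sample_action() if np.random.rand() < eps else int(Q[s].argmax())
            s_next, aoi_cost, duration, done = env.step(a)  # assumed step interface
            r = -(aoi_cost - lam * duration)   # Dinkelbach-parametrized reward
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

def estimate_ratio(env, Q, episodes=50):
    """Monte Carlo estimate of numerator/denominator under the greedy policy."""
    num = den = 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s, aoi_cost, duration, done = env.step(int(Q[s].argmax()))
            num += aoi_cost
            den += duration
    return num / max(den, 1e-8)

def fractional_rl(env, outer_iters=10):
    """Dinkelbach-style outer loop: alternate RL solve and ratio update."""
    lam, Q = 0.0, None
    for _ in range(outer_iters):
        Q = tabular_q_learning(env, lam)   # solve the non-fractional subproblem
        lam = estimate_ratio(env, Q)       # lambda <- time-average AoI of current policy
    return Q, lam
```

The returned lambda approximates the achievable time-average AoI under this toy setup; the paper's actual algorithm instead lets each device make scheduling decisions with a deep RL policy over the hybrid action space, without knowledge of the system dynamics or other devices' decisions.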
Related papers
- Asynchronous Fractional Multi-Agent Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing [14.260646140460187]
We study the timeliness of computation-intensive updates and explore how to jointly optimize the task updating and offloading policies to minimize AoI.
Specifically, we consider edge load dynamics and formulate a task scheduling problem to minimize the expected time-average AoI.
Our proposed algorithms reduce the average AoI by up to 52.6% compared with the best baseline algorithm in our experiments.
arXiv Detail & Related papers (2024-09-25T11:33:32Z)
- When to Sense and Control? A Time-adaptive Approach for Continuous-Time RL [37.58940726230092]
Reinforcement learning (RL) excels in optimizing policies for discrete-time Markov decision processes (MDPs).
We formalize an RL framework, Time-adaptive Control & Sensing (TaCoS), that tackles this challenge.
We demonstrate that state-of-the-art RL algorithms trained on TaCoS drastically reduce the interaction amount over their discrete-time counterpart.
arXiv Detail & Related papers (2024-06-03T09:57:18Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Offloading and Quality Control for AI Generated Content Services in 6G Mobile Edge Computing Networks [18.723955271182007]
This paper proposes a joint optimization algorithm for offloading decisions, computation time, and diffusion steps of the diffusion models in the reverse diffusion stage.
Experimental results conclusively demonstrate that the proposed algorithm achieves superior joint optimization performance compared to the baselines.
arXiv Detail & Related papers (2023-12-11T08:36:27Z)
- Age-Based Scheduling for Mobile Edge Computing: A Deep Reinforcement Learning Approach [58.911515417156174]
We propose a new definition of Age of Information (AoI) and, based on the redefined AoI, we formulate an online AoI problem for MEC systems.
We introduce Post-Decision States (PDSs) to exploit the partial knowledge of the system's dynamics.
We also combine PDSs with deep RL to further improve the algorithm's applicability, scalability, and robustness.
arXiv Detail & Related papers (2023-12-01T01:30:49Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z)
- MCDS: AI Augmented Workflow Scheduling in Mobile Edge Cloud Computing Systems [12.215537834860699]
Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS).
We propose MCDS: Monte Carlo Learning using Deep Surrogate Models to efficiently schedule workflow applications in mobile edge-cloud computing systems.
arXiv Detail & Related papers (2021-12-14T10:00:01Z)
- Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.