AoI-Aware Task Offloading and Transmission Optimization for Industrial IoT Networks: A Branching Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2510.16414v1
- Date: Sat, 18 Oct 2025 09:14:39 GMT
- Title: AoI-Aware Task Offloading and Transmission Optimization for Industrial IoT Networks: A Branching Deep Reinforcement Learning Approach
- Authors: Yuang Chen, Fengqian Guo, Chang Wu, Shuyi Liu, Hancheng Lu, Chang Wen Chen
- Abstract summary: In the Industrial Internet of Things (IIoT), the frequent transmission of large amounts of data over wireless networks must meet stringent timeliness requirements. We propose an age-of-information (AoI)-aware multi-base station (BS) real-time monitoring framework to support extensive IIoT deployments.
- Score: 43.261887758877386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Industrial Internet of Things (IIoT), the frequent transmission of large amounts of data over wireless networks must meet stringent timeliness requirements. In particular, the freshness of packet status updates has a significant impact on system performance. In this paper, we propose an age-of-information (AoI)-aware multi-base station (BS) real-time monitoring framework to support extensive IIoT deployments. To meet the freshness requirements of IIoT, we formulate a joint task offloading and resource allocation optimization problem with the goal of minimizing the long-term average AoI. Tackling the core challenges of combinatorial explosion in multi-BS decision spaces and the stochastic dynamics of IIoT systems is crucial, as these factors render traditional optimization methods intractable. First, an innovative branching-based Dueling Double Deep Q-Network (Branching-D3QN) algorithm is proposed to effectively implement task offloading, improving convergence by reducing the action space complexity from exponential to linear. Then, an efficient resource allocation solution is derived by proving the semi-definite property of the Hessian matrix with respect to bandwidth and computation resources. Finally, we propose an iterative optimization algorithm for joint task offloading and resource allocation to achieve optimal average AoI performance. Extensive simulations demonstrate that the proposed Branching-D3QN algorithm outperforms both state-of-the-art DRL methods and classical heuristics, achieving up to 75% faster convergence and at least a 22% reduction in the long-term average AoI.
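The abstract's key structural idea is that a branching Q-network keeps one output head per device instead of one output per joint action, shrinking the network's output from exponential to linear in the number of devices. The sketch below is a minimal NumPy toy of this idea, not the paper's implementation: the sizes (`N_DEVICES`, `N_BS`), weights, and the single-layer trunk are all illustrative assumptions, and the dueling aggregation `Q_d = V + A_d - mean(A_d)` follows the standard dueling-DQN formulation.

```python
import numpy as np

# Hypothetical sizes: N sensor devices, each offloading to one of M base stations.
N_DEVICES, N_BS = 8, 4

# A flat joint-action DQN needs one Q-value per joint action (exponential),
# while a branching head needs one output per (device, BS) pair (linear).
flat_actions = N_BS ** N_DEVICES        # 4^8 = 65536 joint actions
branched_outputs = N_DEVICES * N_BS     # 8 * 4 = 32 outputs

def branching_dueling_q(state, w_shared, w_value, w_adv):
    """Toy branching dueling head: shared trunk -> scalar V + per-branch advantages."""
    h = np.tanh(state @ w_shared)               # shared state representation
    v = h @ w_value                             # scalar state value, shape (1,)
    adv = (h @ w_adv).reshape(N_DEVICES, N_BS)  # one advantage row per device (branch)
    # Dueling aggregation per branch: Q_d = V + A_d - mean(A_d)
    return v + adv - adv.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
state_dim, hidden = 16, 32
state = rng.normal(size=state_dim)
w_shared = rng.normal(size=(state_dim, hidden)) * 0.1
w_value = rng.normal(size=(hidden, 1)) * 0.1
w_adv = rng.normal(size=(hidden, N_DEVICES * N_BS)) * 0.1

q = branching_dueling_q(state, w_shared, w_value, w_adv)
offload_decision = q.argmax(axis=1)  # one BS index per device, selected independently
```

Because each branch takes its own argmax, action selection costs O(N*M) instead of searching 65536 joint actions, which is the convergence advantage the abstract attributes to the branching architecture.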
Related papers
- Joint Channel Estimation and Computation Offloading in Fluid Antenna-assisted MEC Networks [81.36647816787713]
We propose an FA-assisted offloading framework to minimize the delay of channel estimation. The proposed system is shown to significantly reduce the delay while maintaining estimation accuracy under efficient communication.
arXiv Detail & Related papers (2025-09-16T08:48:44Z) - Beamforming and Resource Allocation for Delay Minimization in RIS-Assisted OFDM Systems [38.71413228444903]
This paper investigates a joint beamforming and resource allocation problem in downlink reconfigurable intelligent surface (RIS)-assisted OFDM systems. To effectively handle the mixed action space and reduce the state space dimensionality, a hybrid deep reinforcement learning (DRL) approach is proposed. The proposed algorithm significantly reduces the average delay, enhances resource allocation efficiency, and achieves superior system robustness and fairness.
arXiv Detail & Related papers (2025-06-04T05:33:33Z) - Dynamic Operating System Scheduling Using Double DQN: A Reinforcement Learning Approach to Task Optimization [2.2045629562818085]
Experimental results show that the Double DQN algorithm achieves high scheduling performance under light-, medium-, and heavy-load scenarios. The algorithm also shows strong optimization ability in resource utilization and can intelligently adjust resource allocation according to the system state. Future studies will explore the application of the algorithm in more complex systems, especially cloud computing and large-scale distributed environments.
arXiv Detail & Related papers (2025-03-31T01:48:21Z) - DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - Network-Aided Intelligent Traffic Steering in 6G O-RAN: A Multi-Layer Optimization Framework [47.57576667752444]
We jointly optimize the flow-split distribution, congestion control, and scheduling (JFCS) to enable an intelligent steering application in open RAN (O-RAN).
Our main contributions are three-fold: i) we propose the novel JFCS framework to efficiently and adaptively direct traffic to appropriate radio units; ii) we develop low-complexity algorithms based on reinforcement learning, inner approximation, and bisection search methods to effectively solve the JFCS problem at different time scales; and iii) we provide rigorous theoretical performance results showing that there exists a scaling factor that improves the tradeoff between delay and utility optimization.
arXiv Detail & Related papers (2023-02-06T11:37:06Z) - Deep Reinforcement Learning for Resource Constrained Multiclass Scheduling in Wireless Networks [0.0]
In our setup, the available limited bandwidth resources are allocated in order to serve randomly arriving service demands.
We propose a distributional Deep Deterministic Policy Gradient (DDPG) algorithm combined with Deep Sets to tackle the problem.
Our proposed algorithm is tested on both synthetic and real data, showing consistent gains against state-of-the-art conventional methods.
arXiv Detail & Related papers (2020-11-27T09:49:38Z) - Deep Learning-based Resource Allocation For Device-to-Device Communication [66.74874646973593]
We propose a framework for the optimization of the resource allocation in multi-channel cellular systems with device-to-device (D2D) communication.
A deep learning (DL) framework is proposed, where the optimal resource allocation strategy for arbitrary channel conditions is approximated by deep neural network (DNN) models.
Our simulation results confirm that near-optimal performance can be attained with low computation time, which underlines the real-time capability of the proposed scheme.
arXiv Detail & Related papers (2020-11-25T14:19:23Z) - Deep Reinforcement Learning for Stochastic Computation Offloading in Digital Twin Networks [1.0509026467663467]
Digital Twin is a promising technology to empower the digital transformation of the Industrial Internet of Things (IIoT).
We first propose a new paradigm Digital Twin Networks (DTN) to build network topology and the task arrival model in IIoT systems.
Then, we formulate the computation offloading and resource allocation problem to maximize the long-term energy efficiency.
arXiv Detail & Related papers (2020-11-17T05:40:16Z) - Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.