A Practical AoI Scheduler in IoT Networks with Relays
- URL: http://arxiv.org/abs/2203.04227v3
- Date: Tue, 25 Apr 2023 15:32:13 GMT
- Title: A Practical AoI Scheduler in IoT Networks with Relays
- Authors: Biplav Choudhury, Prasenjit Karmakar, Vijay K. Shah, Jeffrey H. Reed
- Abstract summary: Existing literature on traditional AoI schedulers for two-hop relayed IoT networks is limited.
Deep reinforcement learning (DRL) algorithms have been investigated for AoI scheduling in two-hop IoT networks with relays.
This paper presents a practical AoI scheduler for two-hop IoT networks with relays.
- Score: 8.361681706210206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Internet of Things (IoT) networks have become ubiquitous as autonomous
computing, communication and collaboration among devices become popular for
accomplishing various tasks. The use of relays in IoT networks further makes it
convenient to deploy IoT networks as relays provide a host of benefits, like
increasing the communication range and minimizing power consumption. Existing
literature on traditional AoI schedulers for such two-hop relayed IoT networks
is limited because these schedulers are designed assuming constant (non-changing)
channel conditions and known (usually, generate-at-will) packet generation patterns.
Deep reinforcement learning (DRL) algorithms have been investigated for AoI
scheduling in two-hop IoT networks with relays; however, they are applicable
only to small-scale IoT networks due to the exponential growth of the action
space as networks become large. These limitations discourage the practical
utilization of AoI schedulers for IoT network deployments. This paper presents
a practical AoI scheduler for two-hop IoT networks with relays that addresses
the above limitations. The proposed scheduler utilizes a novel voting-mechanism-based
proximal policy optimization (v-PPO) algorithm that maintains a linear
action space, enabling it to scale well to larger IoT networks. The proposed
v-PPO based AoI scheduler adapts well to changing network conditions and
accounts for unknown traffic generation patterns, making it practical for
real-world IoT deployments. Simulation results show that the proposed v-PPO
based AoI scheduler outperforms both ML and traditional (non-ML) AoI
schedulers, such as the Deep Q Network (DQN)-based AoI scheduler, Maximal Age
First-Maximal Age Difference (MAF-MAD), Maximal Age First (MAF), and
round-robin, in all considered practical scenarios.
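The AoI metric optimized throughout this paper can be illustrated with a short sketch. This is a generic illustration of the standard AoI definition (age = current time minus the generation time of the freshest delivered packet), not code from the paper; the function name and integer time discretization are illustrative assumptions.

```python
def aoi_trace(deliveries, horizon):
    """Sample the Age of Information at each integer time step.

    deliveries: dict mapping delivery_time -> generation_time of the packet.
    AoI grows linearly with time and drops to (t - generation_time) of the
    freshest packet whenever a newer update is delivered.
    """
    trace = []
    last_gen = None  # generation time of the freshest packet received so far
    for t in range(horizon):
        if t in deliveries:
            gen = deliveries[t]
            last_gen = gen if last_gen is None else max(last_gen, gen)
        # Before any delivery, age equals elapsed time since the start.
        trace.append(t if last_gen is None else t - last_gen)
    return trace
```

For example, a packet generated at t=2 and delivered at t=3 resets the age to 1, after which the age resumes growing by one per slot until the next delivery.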
Related papers
- Robust Generalization of Graph Neural Networks for Carrier Scheduling [4.311529300510196]
This paper introduces RobustGANTT, a GNN-based scheduler that improves generalization (without re-training) to networks up to 1000 nodes.
Our work not only improves resource utilization in large-scale backscatter networks, but also offers valuable insights in learning-based scheduling.
arXiv Detail & Related papers (2024-07-11T13:13:24Z)
- Device Scheduling and Assignment in Hierarchical Federated Learning for Internet of Things [20.09415156099031]
This paper proposes an improved K-Center algorithm for device scheduling and introduces a deep reinforcement learning-based approach for assigning IoT devices to edge servers.
Experiments show that scheduling 50% of IoT devices is generally adequate for achieving convergence in HFL with much lower time delay and energy consumption.
arXiv Detail & Related papers (2024-02-04T14:42:13Z)
- Digital Twin-Native AI-Driven Service Architecture for Industrial Networks [2.2924151077053407]
We propose a DT-native AI-driven service architecture to support IoT networks.
Within the proposed DT-native architecture, we implement a TCP-based data flow pipeline and a Reinforcement Learning (RL)-based learner model.
arXiv Detail & Related papers (2023-11-24T14:56:13Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
arXiv Detail & Related papers (2021-08-05T23:55:44Z) - AoI-minimizing Scheduling in UAV-relayed IoT Networks [21.070161851029663]
We propose scheduling policies for Age of Information (AoI) minimization in two-hop UAV-relayed IoT networks.
We show that MAF-MAD is the optimal scheduler under ideal conditions, i.e., error-free channels and generate-at-will traffic.
For realistic conditions, we propose a Deep Q-Network (DQN)-based scheduler.
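The MAF policy referenced in this entry has a simple greedy form: in each slot, serve the source whose destination-side age is currently largest. A minimal sketch, assuming unit-slot transmissions and an age reset to 1 on successful delivery (function names and the reset convention are illustrative, not taken from the paper):

```python
def maf_schedule(ages):
    """Maximal Age First: pick the source with the largest current AoI
    (ties broken toward the lowest index)."""
    return max(range(len(ages)), key=lambda i: ages[i])

def step(ages, served):
    """Advance one slot: every source's age grows by 1, while the served
    source's age resets to 1 (its freshest packet is now one slot old)."""
    return [1 if i == served else a + 1 for i, a in enumerate(ages)]
```

Repeatedly applying `maf_schedule` and `step` simulates the policy; MAF-MAD extends this idea to the two-hop case by also considering age differences across the relay.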
arXiv Detail & Related papers (2021-07-12T03:52:59Z) - InstantNet: Automated Generation and Deployment of Instantaneously
Switchable-Precision Networks [65.78061366594106]
We propose InstantNet to automatically generate and deploy instantaneously switchable-precision networks which operate at variable bit-widths.
In experiments, the proposed InstantNet consistently outperforms state-of-the-art designs.
arXiv Detail & Related papers (2021-04-22T04:07:43Z) - Enhanced Pub/Sub Communications for Massive IoT Traffic with SARSA
Reinforcement Learning [0.11470070927586014]
Cloud, edge and fog computing are potential and competitive strategies for collecting, processing, and distributing IoT data.
This paper addresses the issue of conveying a massive volume of IoT data through a network with limited communications resources.
It uses cognitive communications resource allocation based on Reinforcement Learning (RL) with the SARSA algorithm.
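SARSA, mentioned in this entry, is the standard on-policy temporal-difference rule Q(s,a) <- Q(s,a) + alpha * [r + gamma * Q(s',a') - Q(s,a)]. A minimal tabular sketch (the dictionary representation and default hyperparameters are illustrative assumptions, not taken from the paper):

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One on-policy SARSA step: move Q(s, a) toward the bootstrapped
    target r + gamma * Q(s', a'), where a' is the action the current
    policy actually took in s' (unlike Q-learning's max over actions)."""
    q = Q.get((s, a), 0.0)
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = q + alpha * (target - q)
    return Q
```

Because the target uses the action actually chosen next, SARSA evaluates the behavior policy itself, which tends to make it more conservative than Q-learning under exploration.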
arXiv Detail & Related papers (2021-01-03T18:46:01Z) - Autonomous Maintenance in IoT Networks via AoI-driven Deep Reinforcement
Learning [73.85267769520715]
Internet of Things (IoT) with its growing number of deployed devices and applications raises significant challenges for network maintenance procedures.
We formulate a problem of autonomous maintenance in IoT networks as a Partially Observable Markov Decision Process.
We utilize Deep Reinforcement Learning algorithms (DRL) to train agents that decide if a maintenance procedure is in order or not and, in the former case, the proper type of maintenance needed.
arXiv Detail & Related papers (2020-12-31T11:19:51Z) - Optimizing Resource-Efficiency for Federated Edge Intelligence in IoT
Networks [96.24723959137218]
We study an edge intelligence-based IoT network in which a set of edge servers learn a shared model using federated learning (FL).
We propose a novel framework, called federated edge intelligence (FEI), that allows edge servers to evaluate the required number of data samples according to the energy cost of the IoT network.
We prove that our proposed algorithm does not cause any data leakage nor disclose any topological information of the IoT network.
arXiv Detail & Related papers (2020-11-25T12:51:59Z) - Cognitive Radio Network Throughput Maximization with Deep Reinforcement
Learning [58.44609538048923]
Radio Frequency powered Cognitive Radio Networks (RF-CRN) are likely to be the eyes and ears of upcoming modern networks such as the Internet of Things (IoT).
To be considered autonomous, the RF-powered network entities need to make decisions locally to maximize the network throughput under the uncertainty of any network environment.
In this paper, deep reinforcement learning is proposed to overcome the shortcomings and allow a wireless gateway to derive an optimal policy to maximize network throughput.
arXiv Detail & Related papers (2020-07-07T01:49:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.