A Centralized Reinforcement Learning Framework for Adaptive Clustering
with Low Control Overhead in IoT Networks
- URL: http://arxiv.org/abs/2401.15767v1
- Date: Sun, 28 Jan 2024 21:08:45 GMT
- Title: A Centralized Reinforcement Learning Framework for Adaptive Clustering
with Low Control Overhead in IoT Networks
- Authors: F. Fernando Jurado-Lasso, J. F. Jurado, and Xenofon Fafoutis
- Abstract summary: This paper introduces Low-Energy Adaptive Clustering Hierarchy with Reinforcement Learning-based Controller (LEACH-RLC).
LEACH-RLC employs Mixed Integer Linear Programming (MILP) for strategic selection of cluster heads (CHs) and node-to-cluster assignments.
It integrates a Reinforcement Learning (RL) agent to minimize control overhead by learning optimal timings for generating new clusters.
Results demonstrate the superior performance of LEACH-RLC over conventional LEACH and LEACH-C, showcasing enhanced network lifetime, reduced average energy consumption, and minimized control overhead.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wireless Sensor Networks (WSNs) play a pivotal role in enabling Internet of
Things (IoT) devices with sensing and actuation capabilities. Operating in
remote and resource-constrained environments, these IoT devices face challenges
related to energy consumption, crucial for network longevity. Clustering
protocols have emerged as an effective solution to alleviate energy burdens on
IoT devices. This paper introduces Low-Energy Adaptive Clustering Hierarchy
with Reinforcement Learning-based Controller (LEACH-RLC), a novel clustering
protocol that employs a Mixed Integer Linear Programming (MILP) formulation for strategic
selection of cluster heads (CHs) and node-to-cluster assignments. Additionally,
it integrates a Reinforcement Learning (RL) agent to minimize control overhead
by learning optimal timings for generating new clusters. Addressing key
research questions, LEACH-RLC seeks to reduce control overhead without
compromising overall network performance. Through extensive
simulations, this paper investigates the frequency and opportune moments for
generating new clustering solutions. Results demonstrate the superior
performance of LEACH-RLC over conventional LEACH and LEACH-C, showcasing
enhanced network lifetime, reduced average energy consumption, and minimized
control overhead. The proposed protocol contributes to advancing the efficiency
and adaptability of WSNs, addressing critical challenges in IoT deployments.
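The abstract's core idea, an agent that learns when re-clustering is worth its control-message cost, can be illustrated with a toy sketch. This is a hypothetical tabular Q-learning formulation, not the paper's actual design: the state buckets, actions, and reward here are illustrative assumptions.

```python
import random

class ReclusterAgent:
    """Toy Q-learning agent that learns WHEN to trigger re-clustering.

    Hypothetical sketch: states bucket the number of rounds elapsed since the
    last clustering solution; actions are KEEP (0) or RECLUSTER (1).  The
    reward would trade energy gains from a fresh clustering against the
    control-packet overhead of disseminating it -- the paper's actual state,
    action, and reward definitions may differ.
    """

    KEEP, RECLUSTER = 0, 1

    def __init__(self, n_states=10, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0, 0.0] for _ in range(n_states)]  # Q-table: state x action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice((self.KEEP, self.RECLUSTER))
        return max((self.KEEP, self.RECLUSTER), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

In such a setup the network controller would call `act` once per round and feed back a reward combining residual energy and control-packet counts; the MILP solver would only run when the agent chooses RECLUSTER.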
Related papers
- SCALE: Self-regulated Clustered federAted LEarning in a Homogeneous Environment [4.925906256430176]
Federated Learning (FL) has emerged as a transformative approach for enabling distributed machine learning while preserving user privacy.
This paper presents a novel FL methodology that overcomes these limitations by eliminating the dependency on edge servers.
arXiv Detail & Related papers (2024-07-25T20:42:16Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, while a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) extends signal coverage.
However, deploying STAR-RIS indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- PeersimGym: An Environment for Solving the Task Offloading Problem with Reinforcement Learning [2.0249250133493195]
We introduce PeersimGym, an open-source, customizable simulation environment tailored for developing and optimizing task offloading strategies within computational networks.
PeersimGym supports a wide range of network topologies and computational constraints and integrates a PettingZoo-based interface for RL agent deployment in both solo and multi-agent setups.
We demonstrate the utility of the environment through experiments with Deep Reinforcement Learning agents, showcasing the potential of RL-based approaches to significantly enhance offloading strategies in distributed computing settings.
arXiv Detail & Related papers (2024-03-26T12:12:44Z)
- Constrained Reinforcement Learning for Adaptive Controller Synchronization in Distributed SDN [7.277944770202078]
This work focuses on examining deep reinforcement learning (DRL) techniques, encompassing both value-based and policy-based methods, to guarantee an upper latency threshold for AR/VR task offloading.
Our evaluation results indicate that while value-based methods excel in optimizing individual network metrics such as latency or load balancing, policy-based approaches exhibit greater robustness in adapting to sudden network changes or reconfiguration.
arXiv Detail & Related papers (2024-01-21T21:57:22Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
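The entropy-plus-return objective that MARLIN's Soft Actor-Critic optimizes can be written out concretely. This is a generic sketch of the standard entropy-regularized return, with toy values; MARLIN's actual state, action, and reward design for congestion control is not shown here.

```python
import math

def entropy(policy_probs):
    """Shannon entropy H(pi(.|s)) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in policy_probs if p > 0)

def soft_return(rewards, policies, alpha=0.2, gamma=0.99):
    """Entropy-regularized return maximized by Soft Actor-Critic:

        J = sum_t gamma^t * (r_t + alpha * H(pi(.|s_t)))

    'alpha' is the temperature that weights exploration (entropy) against
    exploitation (reward).  Inputs here are toy per-step values.
    """
    return sum(gamma ** t * (r + alpha * entropy(pi))
               for t, (r, pi) in enumerate(zip(rewards, policies)))
```

A high-entropy policy earns an exploration bonus at every step, which is why SAC-style agents tend to keep probing the environment, useful under the shifting background traffic the paper trains against.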
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL), in which multiple agents coordinate over a wireless network, is a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Medium Access using Distributed Reinforcement Learning for IoTs with Low-Complexity Wireless Transceivers [2.6397379133308214]
This paper proposes a distributed Reinforcement Learning (RL) based framework that can be used for MAC layer wireless protocols in IoT networks with low-complexity wireless transceivers.
In this framework, the access protocols are first formulated as Markov Decision Processes (MDP) and then solved using RL.
The paper demonstrates the performance of the learning paradigm and its abilities to make nodes adapt their optimal transmission strategies on the fly in response to various network dynamics.
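The "formulate access as an MDP, then solve with RL" recipe can be sketched for a slotted channel. This is a hypothetical minimal formulation, not the paper's: here the state is the previous slot's shared channel feedback, actions are TRANSMIT or WAIT, and each node runs independent tabular Q-learning.

```python
import random

# Hypothetical MDP for slotted medium access: state = feedback from the
# previous slot, action = TRANSMIT or WAIT.  Reward choices are illustrative.
IDLE, SUCCESS, COLLISION = 0, 1, 2   # channel feedback observed by every node
WAIT, TRANSMIT = 0, 1                # per-node actions

class MacNode:
    """Low-complexity node running independent tabular Q-learning."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {s: [0.0, 0.0] for s in (IDLE, SUCCESS, COLLISION)}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice((WAIT, TRANSMIT))
        return max((WAIT, TRANSMIT), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

def slot(nodes, state):
    """Simulate one time slot: nodes act, observe shared feedback, and learn."""
    actions = [n.act(state) for n in nodes]
    tx = actions.count(TRANSMIT)
    next_state = IDLE if tx == 0 else SUCCESS if tx == 1 else COLLISION
    for n, a in zip(nodes, actions):
        # Reward lone transmissions, penalize collisions; waiting is neutral.
        reward = 0.0 if a == WAIT else (1.0 if next_state == SUCCESS else -1.0)
        n.update(state, a, reward, next_state)
    return next_state
```

Running `slot` repeatedly lets nodes adapt their transmission strategies on the fly from collision feedback alone, which mirrors the on-the-fly adaptation the paper demonstrates without requiring complex transceivers.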
arXiv Detail & Related papers (2021-04-29T17:57:43Z)
- Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities [19.723551683930776]
We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC).
For each service request in a cluster, the EC decides which FN should execute the task to serve the request locally at the edge, or whether to reject the task and refer it to the cloud.
We propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy.
arXiv Detail & Related papers (2020-10-19T23:30:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.