LEACH-RLC: Enhancing IoT Data Transmission with Optimized Clustering and Reinforcement Learning
- URL: http://arxiv.org/abs/2401.15767v2
- Date: Fri, 14 Mar 2025 08:36:09 GMT
- Title: LEACH-RLC: Enhancing IoT Data Transmission with Optimized Clustering and Reinforcement Learning
- Authors: F. Fernando Jurado-Lasso, J. F. Jurado, Xenofon Fafoutis,
- Abstract summary: This paper introduces Low-Energy Adaptive Clustering Hierarchy with Reinforcement Learning-based Controller (LEACH-RLC). It employs a MILP approach for the strategic selection of Cluster Heads (CHs) and node-to-cluster assignments, and integrates an RL agent that minimizes control overhead by learning the optimal timing for generating new clusters. Results demonstrate the superior performance of LEACH-RLC over state-of-the-art protocols, showcasing enhanced network lifetime, reduced average energy consumption, and minimized control overhead.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wireless Sensor Networks (WSNs) play a pivotal role in enabling Internet of Things (IoT) devices with sensing and actuation capabilities. Operating in remote and resource-constrained environments, these IoT devices face challenges related to energy consumption, crucial for network longevity. Existing clustering protocols often suffer from high control overhead, inefficient cluster formation, and poor adaptability to dynamic network conditions, leading to suboptimal data transmission and reduced network lifetime. This paper introduces Low-Energy Adaptive Clustering Hierarchy with Reinforcement Learning-based Controller (LEACH-RLC), a novel clustering protocol designed to address these limitations by employing a Mixed Integer Linear Programming (MILP) approach for strategic selection of Cluster Heads (CHs) and node-to-cluster assignments. Additionally, it integrates a Reinforcement Learning (RL) agent to minimize control overhead by learning optimal timings for generating new clusters. LEACH-RLC aims to balance control overhead reduction without compromising overall network performance. Through extensive simulations, this paper investigates the frequency and opportune moments for generating new clustering solutions. Results demonstrate the superior performance of LEACH-RLC over state-of-the-art protocols, showcasing enhanced network lifetime, reduced average energy consumption, and minimized control overhead. The proposed protocol contributes to advancing the efficiency and adaptability of WSNs, addressing critical challenges in IoT deployments.
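As a hedged sketch of the protocol's two decision layers, not the paper's actual formulation, the snippet below stands in for the MILP with an exhaustive search over cluster-head subsets (equivalent at the optimum on small networks) and shows one tabular Q-learning update for the controller's keep-vs-re-cluster decision. The function names, the squared-distance energy proxy, and the binary action space are all illustrative assumptions:

```python
from itertools import combinations

def assignment_cost(nodes, chs):
    # Total squared node-to-CH distance, a common proxy for intra-cluster
    # transmission energy; each non-CH node joins its nearest cluster head.
    return sum(
        min((x - nodes[h][0]) ** 2 + (y - nodes[h][1]) ** 2 for h in chs)
        for i, (x, y) in enumerate(nodes) if i not in chs
    )

def select_cluster_heads(nodes, k):
    # Stand-in for the paper's MILP: brute-force search over all k-subsets,
    # which returns the same optimum when the network is small.
    return set(min(combinations(range(len(nodes)), k),
                   key=lambda chs: assignment_cost(nodes, chs)))

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # One tabular Q-learning step for the controller's binary decision:
    # action 0 = keep the current clusters, action 1 = trigger re-clustering.
    best_next = max(q.get((s_next, b), 0.0) for b in (0, 1))
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
```

On two well-separated groups of three nodes, `select_cluster_heads(nodes, 2)` picks the geometric center of each group, illustrating why joint CH selection and assignment outperforms random rotation.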
Related papers
- Hierarchical Multi-Agent DRL Based Dynamic Cluster Reconfiguration for UAV Mobility Management [46.80160709931929]
Multi-connectivity involves dynamic cluster formation among distributed access points (APs) and coordinated resource allocation from these APs.
We propose a novel mobility management scheme for unmanned aerial vehicles (UAVs) that uses dynamic cluster reconfiguration with energy-efficient power allocation.
arXiv Detail & Related papers (2024-12-05T19:20:42Z)
- Latency Optimization in LEO Satellite Communications with Hybrid Beam Pattern and Interference Control [20.19239663262141]
Low Earth orbit (LEO) satellite communication systems offer high-capacity, low-latency services crucial for next-generation applications.
The dense configuration of LEO constellations poses challenges in resource allocation optimization and interference management.
This paper proposes a novel framework for optimizing the beam scheduling and resource allocation in multi-beam LEO systems.
arXiv Detail & Related papers (2024-11-14T17:18:24Z)
- SCALE: Self-regulated Clustered federAted LEarning in a Homogeneous Environment [4.925906256430176]
Federated Learning (FL) has emerged as a transformative approach for enabling distributed machine learning while preserving user privacy.
This paper presents a novel FL methodology that overcomes these limitations by eliminating the dependency on edge servers.
arXiv Detail & Related papers (2024-07-25T20:42:16Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, while a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) serves users on both sides of the surface. However, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z) - PeersimGym: An Environment for Solving the Task Offloading Problem with Reinforcement Learning [2.0249250133493195]
We introduce PeersimGym, an open-source, customizable simulation environment tailored for developing and optimizing task offloading strategies within computational networks.
PeersimGym supports a wide range of network topologies and computational constraints and integrates a PettingZoo-based interface for RL agent deployment in both solo and multi-agent setups.
We demonstrate the utility of the environment through experiments with Deep Reinforcement Learning agents, showcasing the potential of RL-based approaches to significantly enhance offloading strategies in distributed computing settings.
arXiv Detail & Related papers (2024-03-26T12:12:44Z) - Constrained Reinforcement Learning for Adaptive Controller Synchronization in Distributed SDN [7.277944770202078]
This work focuses on examining deep reinforcement learning (DRL) techniques, encompassing both value-based and policy-based methods, to guarantee an upper latency threshold for AR/VR task offloading.
Our evaluation results indicate that while value-based methods excel in optimizing individual network metrics such as latency or load balancing, policy-based approaches exhibit greater robustness in adapting to sudden network changes or reconfiguration.
arXiv Detail & Related papers (2024-01-21T21:57:22Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in
Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Efficient Parallel Split Learning over Resource-constrained Wireless
Edge Networks [44.37047471448793]
In this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL).
We propose an innovative PSL framework, namely, efficient parallel split learning (EPSL) to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
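The objective MARLIN optimizes, the entropy-regularized return that defines Soft Actor-Critic, can be written down directly. The sketch below is illustrative only: the function name and default coefficients are assumptions, and it computes the quantity being maximized rather than implementing the full actor-critic training loop:

```python
def soft_return(rewards, entropies, alpha=0.2, gamma=0.99):
    # Discounted return augmented with an entropy bonus, the quantity SAC
    # maximizes: G = sum_t gamma^t * (r_t + alpha * H(pi(.|s_t))).
    # alpha trades off exploration (entropy) against exploitation (reward).
    return sum((gamma ** t) * (r + alpha * h)
               for t, (r, h) in enumerate(zip(rewards, entropies)))
```

A larger `alpha` rewards policies that stay stochastic, which is what helps a congestion-control agent keep probing for bandwidth instead of collapsing to one sending rate.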
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless
Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z) - Medium Access using Distributed Reinforcement Learning for IoTs with
Low-Complexity Wireless Transceivers [2.6397379133308214]
This paper proposes a distributed Reinforcement Learning (RL) based framework that can be used for MAC layer wireless protocols in IoT networks with low-complexity wireless transceivers.
In this framework, the access protocols are first formulated as Markov Decision Processes (MDP) and then solved using RL.
The paper demonstrates the performance of the learning paradigm and its abilities to make nodes adapt their optimal transmission strategies on the fly in response to various network dynamics.
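As a minimal illustration of the MDP-then-RL recipe above (not the paper's protocol), the toy below treats slot selection in a frame as a bandit problem: each node learns which slot to transmit in, is rewarded only for collision-free transmissions, and so drifts toward a TDMA-like schedule. All names, the reward shape, and the two-node/two-slot setup are illustrative assumptions:

```python
import random

def train_slot_choice(n_nodes=2, n_slots=2, frames=2000, eps=0.1, alpha=0.1, seed=0):
    # Each node is an epsilon-greedy bandit learner over which slot of a
    # frame to transmit in. Reward is 1 for a successful (collision-free)
    # transmission and 0 for a collision, so nodes anti-coordinate over time.
    rng = random.Random(seed)
    q = [[0.0] * n_slots for _ in range(n_nodes)]
    successes = 0
    for _ in range(frames):
        picks = []
        for node in range(n_nodes):
            if rng.random() < eps:
                picks.append(rng.randrange(n_slots))  # explore a random slot
            else:
                picks.append(max(range(n_slots), key=lambda s: q[node][s]))
        for node, slot in enumerate(picks):
            reward = 1.0 if picks.count(slot) == 1 else 0.0
            successes += int(reward)
            q[node][slot] += alpha * (reward - q[node][slot])  # incremental mean
    return q, successes
```

The same structure, state, action, reward, is what a low-complexity transceiver can run on-device, which is the point the paper makes about adapting transmission strategies on the fly.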
arXiv Detail & Related papers (2021-04-29T17:57:43Z)
- Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities [19.723551683930776]
We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC).
For each service request in a cluster, the EC decides whether to execute the task at one of the FNs, serve the request locally at the edge, or reject the task and refer it to the cloud.
We propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy.
arXiv Detail & Related papers (2020-10-19T23:30:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.