Intelligent Duty Cycling Management and Wake-up for Energy Harvesting IoT Networks with Correlated Activity
- URL: http://arxiv.org/abs/2405.06372v1
- Date: Fri, 10 May 2024 10:16:27 GMT
- Title: Intelligent Duty Cycling Management and Wake-up for Energy Harvesting IoT Networks with Correlated Activity
- Authors: David E. Ruíz-Guirola, Onel L. A. López, Samuel Montejo-Sánchez, Israel Leyva Mayorga, Zhu Han, Petar Popovski
- Abstract summary: This paper presents an approach for energy-neutral Internet of Things (IoT) scenarios where the IoT devices rely entirely on their energy harvesting capabilities to sustain operation.
We use a Markov chain to represent the operation and transmission states of the IoTDs, a modulated Poisson process to model their energy harvesting process, and a discrete-time Markov chain to model their battery state.
We propose a duty-cycling management scheme based on K-nearest neighbors, aiming to strike a trade-off between energy efficiency and detection accuracy.
- Score: 43.00680041385538
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents an approach for energy-neutral Internet of Things (IoT) scenarios where the IoT devices (IoTDs) rely entirely on their energy harvesting capabilities to sustain operation. We use a Markov chain to represent the operation and transmission states of the IoTDs, a modulated Poisson process to model their energy harvesting process, and a discrete-time Markov chain to model their battery state. The aim is to efficiently manage the duty cycling of the IoTDs, so as to prolong their battery life and reduce instances of low-energy availability. We propose a duty-cycling management scheme based on K-nearest neighbors, aiming to strike a trade-off between energy efficiency and detection accuracy. This is done by incorporating spatial and temporal correlations among IoTDs' activity, as well as their energy harvesting capabilities. We also allow the base station to wake up specific IoTDs if more information about an event is needed upon initial detection. Our proposed scheme shows significant improvements in energy savings and performance, with up to 11 times lower misdetection probability and 50% lower energy consumption for high-density scenarios compared to a random duty cycling benchmark.
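The modeling pipeline described in the abstract can be illustrated with a toy simulation (a minimal sketch with made-up parameters; the paper's actual transition probabilities, harvesting rates, feature vectors, and choice of K are not given here): a two-state ambient process modulates a Poisson energy harvest, the battery evolves as a discrete-time Markov chain, and a hand-rolled K-nearest-neighbors vote over neighboring devices' past activity decides whether a device wakes up.

```python
import math
import random

# --- assumed parameters (illustrative only; not taken from the paper) ---
B_MAX = 20                                    # battery capacity, energy units
CONSUME = 3                                   # energy drawn per active slot
HARVEST_RATES = {"good": 2.5, "bad": 0.4}     # Poisson means per ambient state
SWITCH_P = 0.1                                # probability the ambient state flips

def poisson(lam):
    """Sample a Poisson(lam) variate with Knuth's algorithm (stdlib only)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def step_battery(level, ambient, active):
    """One slot of the discrete-time battery Markov chain."""
    if random.random() < SWITCH_P:            # modulated Poisson process:
        ambient = "bad" if ambient == "good" else "good"
    level += poisson(HARVEST_RATES[ambient])  # harvested energy arrives
    if active:
        level -= CONSUME                      # waking/transmitting drains energy
    return max(0, min(B_MAX, level)), ambient

def knn_wake(features, history, k=3):
    """Toy K-nearest-neighbors rule: wake the device if the majority of the
    k most similar past situations (feature vector -> 'event seen?') were events."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    votes = sorted(history, key=lambda h: dist(h[0], features))[:k]
    return sum(label for _, label in votes) > k / 2

# hypothetical history of (neighbor-activity features, event observed) pairs
history = [((1, 1, 0), 1), ((0, 0, 0), 0), ((1, 0, 1), 1), ((0, 1, 0), 0)]
level, ambient = 10, "good"
for slot in range(50):
    active = knn_wake((1, 0, 1), history) and level >= CONSUME
    level, ambient = step_battery(level, ambient, active)
print("final battery level:", level)
```

The KNN vote here stands in for the paper's correlation-aware classifier: a device whose neighbors were active in similar past situations is more likely to observe an event and is therefore worth waking.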
Related papers
- Energy-Aware Dynamic Neural Inference [39.04688735618206]
We introduce an on-device adaptive inference system equipped with an energy-harvester and finite-capacity energy storage.
We show that, as the rate of the ambient energy increases, energy- and confidence-aware control schemes show approximately 5% improvement in accuracy.
We derive a principled policy with theoretical guarantees for confidence-aware and -agnostic controllers.
arXiv Detail & Related papers (2024-11-04T16:51:22Z) - Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa)
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z) - Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) network is a promising technology for assisting Internet-of-Things (IoT)
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into both the IoT devices and the UAVs to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z) - Sustainable Edge Intelligence Through Energy-Aware Early Exiting [0.726437825413781]
We propose energy-adaptive dynamic early exiting to enable efficient and accurate inference in an EH edge intelligence system.
Our approach derives an energy-aware EE policy that determines the optimal amount of computational processing on a per-sample basis.
Results show that accuracy and service rate are improved up to 25% and 35%, respectively, in comparison with an energy-agnostic policy.
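The per-sample early-exit idea can be sketched as follows (illustrative only; the exit costs and accuracies below are hypothetical, not from the paper): each exit point of a multi-exit network has a compute cost and an expected accuracy, and an energy-aware policy selects the deepest exit the current energy budget can afford.

```python
# Minimal sketch of energy-aware early exiting; the numbers are assumptions,
# not the paper's measured values.
# (compute cost in energy units, expected accuracy) per exit point
EXITS = [(1, 0.70), (3, 0.82), (6, 0.91)]

def choose_exit(energy_budget):
    """Return the index of the deepest affordable exit (exit 0 as fallback)."""
    best = 0
    for i, (cost, _) in enumerate(EXITS):
        if cost <= energy_budget:
            best = i
    return best
```

When harvested energy is plentiful the sample runs through the full network (exit 2); when the budget is tight the policy degrades gracefully to an earlier, cheaper exit instead of dropping the sample.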
arXiv Detail & Related papers (2023-05-23T14:17:44Z) - Energy Loss Prediction in IoT Energy Services [0.43012765978447565]
We propose a novel Energy Loss Prediction framework that estimates the energy loss in sharing crowdsourced energy services.
We propose Easeformer, a novel attention-based algorithm to predict the battery levels of IoT devices.
A set of experiments were conducted to demonstrate the feasibility and effectiveness of the proposed framework.
arXiv Detail & Related papers (2023-05-16T09:07:08Z) - tinyMAN: Lightweight Energy Manager using Reinforcement Learning for Energy Harvesting Wearable IoT Devices [0.0]
Energy harvesting from ambient sources is a promising solution to power low-energy wearable devices.
We present a reinforcement learning-based energy management framework, tinyMAN, for resource-constrained wearable IoT devices.
tinyMAN achieves less than 2.36 ms and 27.75 µJ while maintaining up to 45% higher utility compared to prior approaches.
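A tabular Q-learning toy in the same spirit (a sketch only; tinyMAN's actual state space, actions, and reward function are not specified in this summary): the state is a discretized battery level, the action is the energy allocated in a slot, and the assumed reward favors spending energy except when the battery is nearly empty.

```python
import random

# Hypothetical discretization and learning hyperparameters
STATES, ACTIONS = 5, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = [[0.0] * ACTIONS for _ in range(STATES)]

def reward(state, action):
    """Assumed reward: utility grows with allocated energy, but spending
    from a nearly empty battery is penalized."""
    return action - (2 * action if state == 0 else 0)

def step(state, action):
    """Assumed dynamics: allocating energy lowers the battery level,
    harvesting sometimes raises it."""
    nxt = state - action + (1 if random.random() < 0.6 else 0)
    return max(0, min(STATES - 1, nxt))

def train(episodes=2000):
    for _ in range(episodes):
        s = random.randrange(STATES)
        for _ in range(20):
            # epsilon-greedy action selection
            a = random.randrange(ACTIONS) if random.random() < EPS \
                else max(range(ACTIONS), key=lambda x: Q[s][x])
            s2 = step(s, a)
            # standard Q-learning update
            Q[s][a] += ALPHA * (reward(s, a) + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2

train()
policy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(STATES)]
```

The learned `policy` maps each battery level to an energy allocation; on a real device the table lookup itself costs almost nothing, which is the appeal of lightweight RL energy managers on resource-constrained hardware.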
arXiv Detail & Related papers (2022-02-18T16:58:40Z) - Learning, Computing, and Trustworthiness in Intelligent IoT Environments: Performance-Energy Tradeoffs [62.91362897985057]
An Intelligent IoT Environment (iIoTe) is comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications.
This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption.
arXiv Detail & Related papers (2021-10-04T19:41:42Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.