Active management of battery degradation in wireless sensor network using deep reinforcement learning for group battery replacement
- URL: http://arxiv.org/abs/2503.15865v2
- Date: Sat, 22 Mar 2025 20:21:34 GMT
- Title: Active management of battery degradation in wireless sensor network using deep reinforcement learning for group battery replacement
- Authors: Jong-Hyun Jeong, Hongki Jo, Qiang Zhou, Tahsin Afroz Hoque Nishat, Lang Wu
- Abstract summary: Wireless sensor networks (WSNs) have become a promising solution for structural health monitoring (SHM). Battery-powered WSNs offer various advantages over wired systems; however, limited battery life has always been one of the biggest obstacles to practical use of WSNs. This study investigates a deep reinforcement learning (DRL) method for active battery degradation management by optimizing the duty cycle of WSNs at the system level.
- Score: 4.469172054222021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wireless sensor networks (WSNs) have become a promising solution for structural health monitoring (SHM), especially in hard-to-reach or remote locations. Battery-powered WSNs offer various advantages over wired systems; however, limited battery life has always been one of the biggest obstacles to practical use of WSNs, regardless of energy harvesting methods. While various methods have been studied for battery health management, existing methods exclusively aim to extend the lifetime of individual batteries, lacking a system-level view. A consequence of applying such methods is that batteries in a WSN tend to fail at different times, posing significant difficulty in planning and scheduling battery replacement trips. This study investigates a deep reinforcement learning (DRL) method for active battery degradation management by optimizing the duty cycle of WSNs at the system level. This active management strategy effectively reduces early failures of individual batteries, enabling group replacement without sacrificing WSN performance. A simulated environment based on a real-world WSN setup was developed to train a DRL agent and learn optimal duty cycle strategies. The performance of the strategy was validated in a long-term setup with various network sizes, demonstrating its efficiency and scalability.
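As an illustration of the system-level idea described in the abstract, the sketch below shows a toy duty-cycle environment in the Gymnasium style, where the reward trades monitoring performance against the node-to-node spread of battery state of health so that batteries reach end of life together and can be replaced in one trip. The degradation model, thresholds, and reward weights are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (not the authors' implementation): a system-level duty-cycle
# environment for a DRL agent. All constants below are illustrative assumptions.
import numpy as np


class GroupReplacementWSNEnv:
    """Toy WSN battery-degradation environment for system-level duty-cycle control."""

    def __init__(self, n_nodes=8, horizon=365, seed=0):
        self.n_nodes = n_nodes
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        # State of health (SOH) per node, starting near 1.0 with a small spread.
        self.soh = 1.0 - 0.02 * self.rng.random(self.n_nodes)
        return self.soh.copy()

    def step(self, duty_cycles):
        """duty_cycles: array in [0, 1], one entry per sensor node."""
        duty_cycles = np.clip(duty_cycles, 0.0, 1.0)
        # Assumed degradation model: capacity fade grows with duty cycle plus noise.
        fade = 1e-3 * duty_cycles + 1e-4 * self.rng.random(self.n_nodes)
        self.soh -= fade
        self.t += 1

        # Reward: keep monitoring quality high (mean duty cycle) while keeping the
        # SOH spread across nodes small, so batteries fail together and a single
        # group-replacement trip suffices; penalize nodes that fail ahead of the group.
        performance = duty_cycles.mean()
        imbalance = self.soh.std()
        early_failures = np.sum(self.soh < 0.2)
        reward = performance - 5.0 * imbalance - 1.0 * early_failures

        done = self.t >= self.horizon or bool(np.all(self.soh < 0.2))
        return self.soh.copy(), reward, done, {}


# Usage: a random-policy rollout showing the interface a DRL agent would train against.
env = GroupReplacementWSNEnv()
obs = env.reset()
done = False
while not done:
    action = np.random.uniform(0.2, 0.8, size=env.n_nodes)
    obs, reward, done, _ = env.step(action)
```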
Related papers
- Optimizing the Charging of Open Quantum Batteries using Long Short-Term Memory-Driven Reinforcement Learning [0.0]
We study the charging process of a quantum battery in an open quantum setting, where the battery interacts with a charger and a structured reservoir.
A reinforcement learning (RL) charging strategy is proposed, which utilizes the deep deterministic policy gradient algorithm alongside long short-term memory (LSTM) networks.
The RL protocols consistently outperform conventional fixed strategies by controlling the driving field amplitude and coupling parameters in real time.
arXiv Detail & Related papers (2025-04-28T14:40:11Z)
- Deep Reinforcement Learning-Based Optimization of Second-Life Battery Utilization in Electric Vehicles Charging Stations [0.5033155053523042]
This paper presents a deep reinforcement learning (DRL) based planning framework for EV charging stations (EVCS) with battery energy storage systems (BESS), leveraging second-life batteries (SLBs).
We employ the advanced soft actor-critic (SAC) approach, training the model on a year's worth of data to account for seasonal variations.
A tailored reward function enables effective offline training, allowing real-time optimization of EVCS operations under uncertainty.
arXiv Detail & Related papers (2025-02-05T17:50:53Z)
- Deep-MPC: A DAGGER-Driven Imitation Learning Strategy for Optimal Constrained Battery Charging [5.192596329990163]
This manuscript introduces an innovative solution to confront the inherent challenges associated with conventional predictive control strategies for constrained battery charging.
Results drawn from a practical battery simulator that incorporates an electrochemical model highlight substantial improvements in battery charging performance.
arXiv Detail & Related papers (2024-06-23T02:36:02Z)
- Deep Reinforcement Learning for Community Battery Scheduling under Uncertainties of Load, PV Generation, and Energy Prices [5.694872363688119]
This paper presents a deep reinforcement learning (RL) strategy to schedule a community battery system in the presence of uncertainties.
We position the community battery to play a versatile role in integrating local PV energy, reducing peak load, and exploiting energy price fluctuations for arbitrage.
arXiv Detail & Related papers (2023-12-04T13:45:17Z)
- Remaining useful life prediction of Lithium-ion batteries using spatio-temporal multimodal attention networks [4.249657064343807]
Lithium-ion batteries are widely used in various applications, including electric vehicles and renewable energy storage.
The prediction of the remaining useful life (RUL) of batteries is crucial for ensuring reliable and efficient operation.
This paper proposes a two-stage RUL prediction scheme for Lithium-ion batteries using a spatio-temporal multimodal attention network (ST-MAN).
arXiv Detail & Related papers (2023-10-29T07:32:32Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is envisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agents against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Optimizing a domestic battery and solar photovoltaic system with deep reinforcement learning [69.68068088508505]
A reduction in the cost of batteries and solar PV systems has led to high uptake of solar battery home systems.
In this work, we use the deep deterministic policy gradient algorithm to optimise the charging and discharging behaviour of a battery within such a system.
arXiv Detail & Related papers (2021-09-10T10:59:14Z)
- Mobile Cellular-Connected UAVs: Reinforcement Learning for Sky Limits [71.28712804110974]
We propose a novel, general multi-armed bandit (MAB) algorithm to reduce the disconnectivity time, handover rate, and energy consumption of the UAV.
We show how each of these performance indicators (PIs) is improved by adopting a proper range of the corresponding learning parameter.
arXiv Detail & Related papers (2020-09-21T12:35:23Z)