Deep Reinforcement Learning for Optimizing Energy Consumption in Smart Grid Systems
- URL: http://arxiv.org/abs/2602.18531v1
- Date: Fri, 20 Feb 2026 07:52:05 GMT
- Title: Deep Reinforcement Learning for Optimizing Energy Consumption in Smart Grid Systems
- Authors: Abeer Alsheikhi, Amirfarhad Farhadi, Azadeh Zamanifar
- Abstract summary: Reinforcement Learning (RL) has been proposed for solving Optimal Power Flow (OPF) problems. This study uses Physics-Informed Neural Networks (PINNs) to replace conventional and costly smart grid simulators. Results show that PINN surrogates can accelerate training by 50% compared to RL training without a surrogate.
- Score: 0.9558392439655014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The energy management problem in the context of smart grids is inherently complex due to the interdependencies among diverse system components. Although Reinforcement Learning (RL) has been proposed for solving Optimal Power Flow (OPF) problems, the requirement for iterative interaction with an environment often necessitates computationally expensive simulators, leading to significant sample inefficiency. In this study, these challenges are addressed through the use of Physics-Informed Neural Networks (PINNs), which can replace conventional and costly smart grid simulators. The RL policy learning process is enhanced so that convergence can be achieved in a fraction of the time required by the original environment. The PINN-based surrogate is compared with other benchmark data-driven surrogate models. Because it incorporates knowledge of the underlying physical laws, the PINN surrogate is the only approach considered in this context that can obtain a strong RL policy even without access to samples from the true simulator. The results further demonstrate that PINN surrogates can accelerate training by 50% compared to RL training without a surrogate. This approach enables the rapid generation of performance scores similar to those produced by the original simulator.
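The abstract's core mechanism can be sketched in a few lines: a surrogate is fit with a combined data-fit and physics-residual loss, after which an RL agent can query the cheap surrogate in place of the expensive simulator. The linear model, the stand-in "power balance" law f(x) + x = 1, and all names below are illustrative assumptions, not the paper's actual grid equations.

```python
import numpy as np

# Toy PINN-style surrogate: f(x) = w * x maps a grid load x to a bus
# voltage y. Training minimizes a data-fit term plus the residual of an
# assumed physics law f(x) + x = 1 (a hypothetical stand-in, not the
# paper's actual power-flow equations).

def pinn_loss(w, x, y, lam=1.0):
    pred = w * x
    data = np.mean((pred - y) ** 2)            # fit to simulator samples
    physics = np.mean((pred + x - 1.0) ** 2)   # residual of the physics law
    return data + lam * physics

def train_surrogate(x, y, lr=0.1, steps=300):
    w = 0.0
    for _ in range(steps):
        # finite-difference gradient keeps the sketch dependency-free
        eps = 1e-6
        g = (pinn_loss(w + eps, x, y) - pinn_loss(w - eps, x, y)) / (2 * eps)
        w -= lr * g
    return w

x = np.linspace(0.1, 0.9, 20)   # sampled operating points
y = 1.0 - x                     # "simulator" outputs consistent with the law
w = train_surrogate(x, y)
# An RL agent would now roll out against w * x instead of the costly simulator.
```

Because the physics term supplies gradient signal even where simulator samples are sparse, a surrogate of this shape can remain usable with few (or, as the abstract claims for the full method, no) samples from the true simulator.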
Related papers
- Scientific Knowledge-Guided Machine Learning for Vessel Power Prediction: A Comparative Study [0.0]
This study introduces a hybrid modeling framework that integrates physics-based knowledge from sea trials with data-driven residual learning. The proposed framework provides a practical and computationally efficient tool for vessel performance monitoring, with applications in weather routing, trim optimization, and energy efficiency planning.
arXiv Detail & Related papers (2026-02-20T18:12:14Z) - Optimizing Energy Management of Smart Grid using Reinforcement Learning aided by Surrogate models built using Physics-informed Neural Networks [29.49941497527361]
Reinforcement Learning (RL) is gaining prominence as a solution for addressing the challenges of Optimal Power Flow in smart grids. We address this problem by substituting costly smart grid simulators with surrogate models built using Physics-informed Neural Networks (PINNs).
arXiv Detail & Related papers (2025-10-20T10:17:42Z) - Surrogate-Assisted Evolutionary Reinforcement Learning Based on Autoencoder and Hyperbolic Neural Network [17.24129493200616]
This paper proposes a novel surrogate-assisted ERL method that integrates Autoencoders (AE) and Hyperbolic Neural Networks (HNN). The AE compresses high-dimensional policies into low-dimensional representations while extracting key features as the inputs for the surrogate. Experiments on 10 Atari and 4 Mujoco games have verified that the proposed method significantly outperforms previous approaches.
arXiv Detail & Related papers (2025-05-26T02:25:17Z) - Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association in 6G satellite networks.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z) - Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
arXiv Detail & Related papers (2024-02-09T07:45:26Z) - Energy-Efficient and Real-Time Sensing for Federated Continual Learning via Sample-Driven Control [21.871879862642235]
Real-Time Sensing (RTS) systems must continuously acquire, update, integrate, and apply knowledge to adapt to real-world dynamics. We investigate how the data distribution shift from ideal to practical RTS scenarios affects Artificial Intelligence (AI) model performance. We develop a novel Sample-driven Control for Federated Continual Learning (SCFL) technique, specifically designed for mobile edge networks with RTS capabilities.
arXiv Detail & Related papers (2023-10-11T13:50:28Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and training time.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
arXiv Detail & Related papers (2022-12-06T02:50:53Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surfaces (IRS) have been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z) - Model-Driven Beamforming Neural Networks [47.754731555563836]
This article introduces general data- and model-driven beamforming neural networks (BNNs). It presents various possible learning strategies and discusses complexity reduction for the DL-based BNNs.
We also offer enhancement methods such as training-set augmentation and transfer learning in order to improve the generality of BNNs.
arXiv Detail & Related papers (2020-01-15T12:50:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.