Empowering Safe Reinforcement Learning for Power System Control with CommonPower
- URL: http://arxiv.org/abs/2406.03231v2
- Date: Tue, 16 Jul 2024 09:48:19 GMT
- Title: Empowering Safe Reinforcement Learning for Power System Control with CommonPower
- Authors: Michael Eichelbeck, Hannah Markgraf, Matthias Althoff
- Abstract summary: We introduce the Python tool CommonPower, which enables flexible, model-based safeguarding of RL controllers.
CommonPower offers a unified interface for single-agent RL, multi-agent RL, and optimal control, with seamless integration of different forecasting methods.
We demonstrate CommonPower's versatility through a numerical case study that compares RL agents featuring different safeguards with a model predictive controller in the context of building energy management.
- Score: 7.133681867718039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing complexity of power system management has led to an increased interest in reinforcement learning (RL). However, vanilla RL controllers cannot themselves ensure satisfaction of system constraints. Therefore, combining them with formally correct safeguarding mechanisms is an important aspect when studying RL for power system management. Integrating safeguarding into complex use cases requires tool support. To address this need, we introduce the Python tool CommonPower. CommonPower's unique contribution lies in its symbolic modeling approach, which enables flexible, model-based safeguarding of RL controllers. Moreover, CommonPower offers a unified interface for single-agent RL, multi-agent RL, and optimal control, with seamless integration of different forecasting methods. This allows users to validate the effectiveness of safe RL controllers across a large variety of case studies and investigate the influence of specific aspects on overall performance. We demonstrate CommonPower's versatility through a numerical case study that compares RL agents featuring different safeguards with a model predictive controller in the context of building energy management.
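To make the safeguarding idea concrete: a model-based safety layer sits between the RL policy and the controlled system and replaces any proposed action that would violate the system model's constraints with the closest feasible one. The sketch below is a hypothetical, simplified illustration of that principle and does not use CommonPower's actual API; the function name `safeguard_action`, the battery parameters, and the use of cvxpy are assumptions made purely for this example.

```python
# Hypothetical sketch of model-based safeguarding (not the CommonPower API):
# the safeguard projects the RL agent's proposed action onto the feasible set
# defined by a simple battery model, returning the closest safe action.
import cvxpy as cp

def safeguard_action(proposed_power, soc, capacity=10.0, p_max=5.0, dt=1.0):
    """Project a proposed charging/discharging power (kW) onto the set of
    actions that respect power limits and keep the state of charge in [0, 1]."""
    p_safe = cp.Variable()
    soc_next = soc + p_safe * dt / capacity   # simplified battery dynamics
    constraints = [
        cp.abs(p_safe) <= p_max,              # power limit
        soc_next >= 0.0,                      # no over-discharge
        soc_next <= 1.0,                      # no over-charge
    ]
    objective = cp.Minimize(cp.square(p_safe - proposed_power))
    cp.Problem(objective, constraints).solve()
    return float(p_safe.value)

# Example: the agent proposes 6 kW of charging at 90% state of charge;
# the safeguard reduces it to the largest feasible value (1 kW here).
print(safeguard_action(proposed_power=6.0, soc=0.9))
```

According to the abstract, CommonPower derives such safeguards from its symbolic system model rather than from a hand-written projection like this one; the sketch only illustrates the safety-layer principle the tool automates.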
Related papers
- Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control [0.0]
This study proposes a new robust control approach using the framework of deep reinforcement learning (DRL).
The problem setup is modeled via the latent Markov decision process (LMDP), a set of vanilla MDPs, for a controlled system subject to uncertainties and nonlinearities.
Compared to traditional DRL-based controls, the proposed controller design achieves a high level of generalization ability.
arXiv Detail & Related papers (2025-04-28T12:09:07Z)
- SINERGYM -- A virtual testbed for building energy optimization with Reinforcement Learning [40.71019623757305]
This paper presents Sinergym, an open-source Python-based virtual testbed for large-scale building simulation, data collection, continuous control, and experiment monitoring.
arXiv Detail & Related papers (2024-12-11T11:09:13Z)
- Large Language Model-Enhanced Reinforcement Learning for Generic Bus Holding Control Strategies [12.599164162404994]
This study introduces an automatic reward generation paradigm by leveraging the in-context learning and reasoning capabilities of Large Language Models (LLMs).
To evaluate the feasibility of the proposed LLM-enhanced RL paradigm, it is applied to various bus holding control scenarios, including a synthetic single-line system and a real-world multi-line system.
arXiv Detail & Related papers (2024-10-14T07:10:16Z)
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z)
- Real-World Implementation of Reinforcement Learning Based Energy Coordination for a Cluster of Households [3.901860248668672]
We present a real-life pilot study investigating the effectiveness of reinforcement learning (RL) in coordinating the power consumption of 8 residential buildings to jointly track a target power signal.
Our results demonstrate satisfactory power tracking and the effectiveness of the RL-based ranks, which are learnt in a purely data-driven manner.
arXiv Detail & Related papers (2023-10-29T21:10:38Z)
- RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion [16.800984476447624]
This paper presents a control framework that combines model-based optimal control and reinforcement learning.
We validate the robustness and controllability of the framework through a series of experiments.
Our framework effortlessly supports the training of control policies for robots with diverse dimensions.
arXiv Detail & Related papers (2023-05-29T01:33:55Z)
- Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using Multi-agent Deep Reinforcement Learning [6.519522573636577]
Multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways contributing to decarbonization.
This paper studies a multi-agent deep reinforcement learning (MADRL) control method for energy management of the multi-mode PHEV.
Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system.
arXiv Detail & Related papers (2023-03-16T21:31:55Z)
- A Dynamic Feedforward Control Strategy for Energy-efficient Building System Operation [59.56144813928478]
Most current control strategies and optimization algorithms rely on receiving information from real-time feedback.
We propose an engineer-friendly control strategy framework that embeds dynamic prior knowledge of building system characteristics into system control.
We tested it in a heating system control case against typical control strategies, which shows our framework offers a further energy-saving potential of 15%.
arXiv Detail & Related papers (2023-01-23T09:07:07Z)
- Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning [78.31888150539258]
Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning.
Prior studies in RL have incorporated external knowledge policies to help agents improve sample efficiency.
We present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.
arXiv Detail & Related papers (2022-10-07T17:56:57Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Skip Training for Multi-Agent Reinforcement Learning Controller for Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, and the controllers must efficiently maximize the energy capture.
This paper introduces a Multi-Agent Reinforcement Learning controller (MARL), which outperforms the traditionally used spring damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z)
- Deep Reinforcement Learning with Shallow Controllers: An Experimental Application to PID Tuning [3.9146761527401424]
We demonstrate the challenges in implementing a state-of-the-art deep RL algorithm on a real physical system.
At the core of our approach is the use of a PID controller as the trainable RL policy.
arXiv Detail & Related papers (2021-11-13T18:48:28Z)
- Distributional Reinforcement Learning for Multi-Dimensional Reward Functions [91.88969237680669]
We introduce Multi-Dimensional Distributional DQN (MD3QN) to model the joint return distribution from multiple reward sources.
As a by-product of joint distribution modeling, MD3QN can capture the randomness in returns for each source of reward.
In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions.
arXiv Detail & Related papers (2021-10-26T11:24:23Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agents against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Reinforcement Learning as One Big Sequence Modeling Problem [84.84564880157149]
Reinforcement learning (RL) is typically concerned with estimating single-step policies or single-step models.
We view RL as a sequence modeling problem, with the goal being to predict a sequence of actions that leads to a sequence of high rewards.
arXiv Detail & Related papers (2021-06-03T17:58:51Z)
- RL-Controller: a reinforcement learning framework for active structural control [0.0]
We present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment.
We show that the proposed framework is easily trainable for a five-story benchmark building, with 65% reductions on average in inter-story drifts.
In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns more effective actuator forcing strategies.
arXiv Detail & Related papers (2021-03-13T04:42:13Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)