Reinforcement Learning for Thermostatically Controlled Loads Control
using Modelica and Python
- URL: http://arxiv.org/abs/2005.04444v1
- Date: Sat, 9 May 2020 13:35:49 GMT
- Title: Reinforcement Learning for Thermostatically Controlled Loads Control
using Modelica and Python
- Authors: Oleh Lukianykhin, Tetiana Bogodorova
- Abstract summary: The project aims to investigate and assess opportunities for applying reinforcement learning (RL) for power system control.
As a proof of concept (PoC), voltage control of thermostatically controlled loads (TCLs) for power consumption regulation was developed using a Modelica-based pipeline.
The paper shows the influence of Q-learning parameters, including discretization of state-action space, on the controller performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aim of the project is to investigate and assess opportunities for
applying reinforcement learning (RL) for power system control. As a proof of
concept (PoC), voltage control of thermostatically controlled loads (TCLs) for
power consumption regulation was developed using a Modelica-based pipeline. The
Q-learning RL algorithm has been validated for deterministic and stochastic
initialization of TCLs. The latter modelling is closer to real grid behaviour,
which challenges the control development, considering the stochastic nature of
load switching. In addition, the paper shows the influence of Q-learning
parameters, including discretization of state-action space, on the controller
performance.
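The abstract highlights how discretization of the state-action space affects Q-learning performance. As a minimal illustration of tabular Q-learning over a discretized space, the sketch below uses a toy environment interface standing in for the Modelica TCL simulation; the function names, environment, and parameter values are all hypothetical, not the paper's pipeline:

```python
import random

def discretize(value, low, high, n_bins):
    """Map a continuous observation to one of n_bins indices."""
    if value <= low:
        return 0
    if value >= high:
        return n_bins - 1
    return int((value - low) / (high - low) * n_bins)

def q_learning(env_reset, env_step, n_states, n_actions,
               episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning over a discretized state-action space."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2, r, done = env_step(s, a)
            # one-step Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

Coarser bins speed up learning but limit control precision; finer bins do the opposite, which is the trade-off the paper studies.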
Related papers
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that an adaptive control resolution in combination with value decomposition yields simple critic-only algorithms that achieve surprisingly strong performance on continuous control tasks.
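The coarse-to-fine idea above can be illustrated with a small helper that generates an evenly spaced discrete action set whose resolution roughly doubles with each level; this is a hypothetical sketch of the growing principle, not the paper's implementation:

```python
def action_set(low, high, level):
    """Discrete action set at a given resolution level.

    Level 1 gives the coarse set {low, mid, high}; each further
    level roughly doubles the count (2**level + 1 evenly spaced
    actions), so every coarse action remains available.
    """
    n = 2 ** level + 1
    step = (high - low) / (n - 1)
    return [low + i * step for i in range(n)]
```

Training would start the critic on the coarse set and switch to finer levels as learning progresses, since each coarse action survives into the finer resolution.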
arXiv Detail & Related papers (2024-04-05T17:58:37Z) - A Safe Reinforcement Learning Algorithm for Supervisory Control of Power
Plants [7.1771300511732585]
Model-free reinforcement learning (RL) has emerged as a promising solution for control tasks.
We propose a chance-constrained RL algorithm based on Proximal Policy Optimization for supervisory control.
Our approach achieves the smallest violation distance and violation rate in a load-follow maneuver for an advanced Nuclear Power Plant design.
arXiv Detail & Related papers (2024-01-23T17:52:49Z) - An experimental evaluation of Deep Reinforcement Learning algorithms for HVAC control [40.71019623757305]
Recent studies have shown that Deep Reinforcement Learning (DRL) algorithms can outperform traditional reactive controllers.
This paper provides a critical and reproducible evaluation of several state-of-the-art DRL algorithms for HVAC control.
arXiv Detail & Related papers (2024-01-11T08:40:26Z) - Steady-State Error Compensation in Reference Tracking and Disturbance
Rejection Problems for Reinforcement Learning-Based Control [0.9023847175654602]
Reinforcement learning (RL) is a promising, upcoming topic in automatic control applications.
Initiative action state augmentation (IASA) for actor-critic-based RL controllers is introduced.
This augmentation does not require any expert knowledge, leaving the approach model free.
arXiv Detail & Related papers (2022-01-31T16:29:19Z) - RL-Controller: a reinforcement learning framework for active structural
control [0.0]
We present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment.
We show that the proposed framework is easily trainable for a five-story benchmark building, achieving average reductions of 65% in inter-story drifts.
In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns more effective actuator forcing strategies.
arXiv Detail & Related papers (2021-03-13T04:42:13Z) - Regularizing Action Policies for Smooth Control with Reinforcement
Learning [47.312768123967025]
Conditioning for Action Policy Smoothness (CAPS) is an effective yet intuitive regularization on action policies.
CAPS offers consistent improvement in the smoothness of the learned state-to-action mappings of neural network controllers.
Tested on a real system, improvements in controller smoothness on a quadrotor drone resulted in an almost 80% reduction in power consumption.
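CAPS, as summarized above, penalizes the policy for producing different actions in consecutive states (temporal smoothness) and in nearby states (spatial smoothness). A hedged sketch of the two penalty terms follows; the function names, weights, and perturbation scale are illustrative, not the paper's exact formulation:

```python
import math
import random

def caps_penalty(pi, states, next_states, sigma=0.05,
                 lam_t=1.0, lam_s=1.0, rng=None):
    """Smoothness penalties in the spirit of CAPS.

    pi maps a state (list of floats) to an action (list of floats).
    The temporal term penalizes action changes between consecutive
    states; the spatial term penalizes action changes under small
    Gaussian perturbations of each state.
    """
    rng = rng or random.Random(0)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    temporal = sum(dist(pi(s), pi(s2))
                   for s, s2 in zip(states, next_states))
    spatial = sum(
        dist(pi(s), pi([x + rng.gauss(0.0, sigma) for x in s]))
        for s in states
    )
    n = len(states)
    return lam_t * temporal / n + lam_s * spatial / n
```

In training, this penalty would be added to the actor's loss, pushing the learned state-to-action mapping toward smooth control signals.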
arXiv Detail & Related papers (2020-12-11T21:35:24Z) - Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z) - Data-Driven Learning and Load Ensemble Control [1.647866856596524]
This study aims to engage distributed small-scale flexible loads, such as thermostatically controllable loads (TCLs) to provide grid support services.
The efficiency of this data-driven learning is demonstrated through simulations on Heating, Cooling & Ventilation units in a testbed neighborhood of residential houses.
arXiv Detail & Related papers (2020-04-20T23:32:10Z) - Reinforcement Learning for Safety-Critical Control under Model
Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.